Sample records for software effort estimation

  1. Estimating Software Effort Hours for Major Defense Acquisition Programs

    ERIC Educational Resources Information Center

    Wallshein, Corinne C.

    2010-01-01

    Software Cost Estimation (SCE) uses labor hours or effort required to conceptualize, develop, integrate, test, field, or maintain program components. Department of Defense (DoD) SCE can use initial software data parameters to project effort hours for large, software-intensive programs for contractors reporting the top levels of process maturity,…

  2. Software Transition Project Retrospectives and the Application of SEL Effort Estimation Model and Boehm's COCOMO to Complex Software Transition Projects

    NASA Technical Reports Server (NTRS)

    McNeill, Justin

    1995-01-01

    The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As a part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger-than-anticipated design work on software to be ported.
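
    Illustrative sketch (not data from the record above): Boehm's Basic COCOMO, cited alongside the SEL Effort Estimation Model, expresses effort and schedule as power laws of size. The organic-mode coefficients below are the standard published Basic COCOMO values; the 32 KSLOC input is a made-up example, not data from the MIPS transitions.

      # Basic COCOMO, organic mode: a minimal sketch. The SEL model has the same
      # power-law form but uses locally calibrated constants. The size value is
      # illustrative only.
      def basic_cocomo_organic(ksloc: float):
          effort_pm = 2.4 * ksloc ** 1.05        # effort in person-months
          tdev_months = 2.5 * effort_pm ** 0.38  # development schedule in months
          staff = effort_pm / tdev_months        # average staffing level
          return effort_pm, tdev_months, staff

      effort, schedule, staff = basic_cocomo_organic(32.0)
      print(f"effort = {effort:.1f} PM, schedule = {schedule:.1f} months, staff = {staff:.1f}")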

  3. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

    COSTMODL is an automated software-development estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as function of defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.

  4. Estimating Software-Development Costs With Greater Accuracy

    NASA Technical Reports Server (NTRS)

    Baker, Dan; Hihn, Jairus; Lum, Karen

    2008-01-01

    COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their projects. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development-effort-estimation techniques is used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performances of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.
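
    The abstract does not spell out the '2cee' algorithm; the sketch below only illustrates the general idea of an exhaustive search over the parameters of a power-law effort model scored against historical projects, as a hedged reading of "exhaustive data-mining search over the space of model parameters". The data and the MMRE scoring choice are hypothetical.

      # Hypothetical grid search over (a, b) in effort = a * size**b, scored by
      # mean magnitude of relative error (MMRE) on historical projects.
      import itertools

      history = [(10.0, 24.0), (32.0, 91.0), (55.0, 150.0)]  # (KSLOC, person-months), illustrative

      def mmre(a, b):
          return sum(abs(e - a * s ** b) / e for s, e in history) / len(history)

      grid_a = [x / 10 for x in range(10, 41)]    # a in 1.0 .. 4.0
      grid_b = [x / 100 for x in range(90, 131)]  # b in 0.90 .. 1.30
      best = min(itertools.product(grid_a, grid_b), key=lambda p: mmre(*p))
      print("best (a, b):", best, "MMRE:", round(mmre(*best), 3))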

  5. Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements

    NASA Astrophysics Data System (ADS)

    Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga

    The increased awareness of non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to early requirements-based effort estimation, based on a Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experience in using it.
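
    A minimal sketch of the kind of size-plus-regression relationship the paper describes: effort regressed on a functional size measure plus a non-functional-requirements size term via ordinary least squares. The variable names and data are hypothetical, not the case-study values.

      # Ordinary least squares: effort ~ b0 + b1*functional_size + b2*NFR_size.
      import numpy as np

      fsize  = np.array([120.0, 200.0, 310.0, 450.0])     # functional size (e.g. function points), illustrative
      nfr    = np.array([15.0, 40.0, 35.0, 80.0])         # NFR-related size contribution, illustrative
      effort = np.array([900.0, 1700.0, 2400.0, 3900.0])  # person-hours, illustrative

      X = np.column_stack([np.ones_like(fsize), fsize, nfr])
      (b0, b1, b2), *_ = np.linalg.lstsq(X, effort, rcond=None)
      print(f"effort ~ {b0:.1f} + {b1:.2f}*FP + {b2:.2f}*NFR")
      print("prediction for FP=250, NFR=50:", round(b0 + b1 * 250 + b2 * 50, 1), "person-hours")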

  6. FPA Depot - Web Application

    NASA Technical Reports Server (NTRS)

    Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam

    2011-01-01

    Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that coding accounts for only 30% to 35% of the total effort of a software project [8]. Because of this disadvantage, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called Function Point Analysis (FPA) Depot, which uses function points rather than LOC to produce better estimates of the hours needed to develop each piece of software. The objective of this web application is that the LCS software architecture team can use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.
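
    For reference, a minimal function point sketch using the standard IFPUG average-complexity weights and the value adjustment factor formula; the component counts and GSC ratings are hypothetical and are not taken from the FPA Depot data.

      # Unadjusted function points (UFP) with IFPUG average-complexity weights,
      # then the value adjustment factor VAF = 0.65 + 0.01 * sum(GSC ratings).
      AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
      counts = {"EI": 12, "EO": 8, "EQ": 6, "ILF": 5, "EIF": 3}   # illustrative counts

      ufp = sum(counts[k] * AVG_WEIGHTS[k] for k in counts)
      gsc = [3] * 14                      # 14 general system characteristics, rated 0..5 (illustrative)
      vaf = 0.65 + 0.01 * sum(gsc)
      print(f"UFP = {ufp}, VAF = {vaf:.2f}, adjusted FP = {ufp * vaf:.1f}")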

  7. Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach

    ERIC Educational Resources Information Center

    Stevenson, Glenn A.

    2012-01-01

    For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…
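
    The abstract above is truncated and does not give the dissertation's actual method; the sketch below is only a generic illustration of fuzzy expert estimation, combining expert size judgments expressed as triangular fuzzy numbers and defuzzifying by centroid. All numbers are invented.

      # Generic fuzzy-expert sizing illustration (not the study's method).
      def centroid(tri):
          low, mode, high = tri
          return (low + mode + high) / 3.0    # centroid of a triangular fuzzy number

      experts = [
          (20_000, 28_000, 40_000),   # expert 1: (low, most likely, high) SLOC, illustrative
          (18_000, 30_000, 45_000),   # expert 2
          (25_000, 32_000, 38_000),   # expert 3
      ]
      sizes = [centroid(t) for t in experts]
      print("per-expert defuzzified sizes:", [round(s) for s in sizes])
      print("combined size estimate:", round(sum(sizes) / len(sizes)), "SLOC")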

  8. Software Effort Estimation Accuracy: A Comparative Study of Estimations Based on Software Sizing and Development Methods

    ERIC Educational Resources Information Center

    Lafferty, Mark T.

    2010-01-01

    The number of project failures and those projects completed over cost and over schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…

  9. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DoD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest-neighbor prediction model performance on the same data set.
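
    A hedged sketch of the analogy-based (k-nearest-neighbor) estimation idea referenced above: normalize project features, find the k most similar completed projects, and average their efforts. The features, efforts, and value of k are hypothetical, not the NASA data set.

      import numpy as np

      features = np.array([[10.0, 2], [35.0, 3], [60.0, 4], [12.0, 3], [48.0, 2]])  # e.g. KSLOC, team-size class
      efforts  = np.array([30.0, 120.0, 260.0, 45.0, 170.0])                        # person-months, illustrative

      def knn_estimate(new_project, k=2):
          lo, hi = features.min(axis=0), features.max(axis=0)
          norm = (features - lo) / (hi - lo)            # min-max normalize historical projects
          q = (np.asarray(new_project) - lo) / (hi - lo)
          dist = np.linalg.norm(norm - q, axis=1)       # Euclidean distance to each analogue
          return efforts[np.argsort(dist)[:k]].mean()   # mean effort of the k nearest

      print("analogy estimate:", round(knn_estimate([40.0, 3]), 1), "person-months")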

  10. A study of fault prediction and reliability assessment in the SEL environment

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Patnaik, Debabrata

    1986-01-01

    An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.

  11. Ask Pete, software planning and estimation through project characterization

    NASA Technical Reports Server (NTRS)

    Kurtz, T.

    2001-01-01

    Ask Pete was developed by NASA to provide a tool for integrating the estimation and planning activities for a software development effort. It incorporates COCOMO II estimating with NASA's software development practices and IV&V criteria to characterize a project. This characterization is then used to generate estimates and tailored planning documents.

  12. An approach to software cost estimation

    NASA Technical Reports Server (NTRS)

    Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.

    1984-01-01

    A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of source estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.

  13. Improving Software Engineering on NASA Projects

    NASA Technical Reports Server (NTRS)

    Crumbley, Tim; Kelly, John C.

    2010-01-01

    Software Engineering Initiative: Reduces risk of software failure. Increases mission safety. More predictable software cost estimates and delivery schedules. Smarter buyer of contracted out software. More defects found and removed earlier. Reduces duplication of efforts between projects. Increases ability to meet the challenges of evolving software technology.

  14. SOFTCOST - DEEP SPACE NETWORK SOFTWARE COST MODEL

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1994-01-01

    The early-on estimation of required resources and a schedule for the development and maintenance of software is usually the least precise aspect of the software life cycle. However, it is desirable to make some sort of an orderly and rational attempt at estimation in order to plan and organize an implementation effort. The Software Cost Estimation Model program, SOFTCOST, was developed to provide a consistent automated resource and schedule model which is more formalized than the often-used guesswork model based on experience, intuition, and luck. SOFTCOST was developed after the evaluation of a number of existing cost estimation programs indicated that there was a need for a cost estimation program with a wide range of application and adaptability to diverse kinds of software. SOFTCOST combines several software cost models found in the open literature into one comprehensive set of algorithms that compensate for nearly fifty implementation factors relative to size of the task, inherited baseline, organizational and system environment, and difficulty of the task. SOFTCOST produces mean and variance estimates of software size, implementation productivity, recommended staff level, probable duration, amount of computer resources required, and amount and cost of software documentation. Since the confidence level for a project using mean estimates is small, the user is given the opportunity to enter risk-biased values for effort, duration, and staffing, to achieve higher confidence levels. SOFTCOST then produces a PERT/CPM file with subtask efforts, durations, and precedences defined so as to produce the Work Breakdown Structure (WBS) and schedule having the asked-for overall effort and duration. The SOFTCOST program operates in an interactive environment prompting the user for all of the required input. The program builds the supporting PERT data base in a file for later report generation or revision. The PERT schedule and the WBS schedule may be printed and stored in a file for later use. The SOFTCOST program is written in Microsoft BASIC for interactive execution and has been implemented on an IBM PC-XT/AT operating MS-DOS 2.1 or higher with 256K bytes of memory. SOFTCOST was originally developed for the Zilog Z80 system running under CP/M in 1981. It was converted to run on the IBM PC XT/AT in 1986. SOFTCOST is a copyrighted work with all copyright vested in NASA.
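
    A minimal sketch of the PERT-style three-point treatment of task effort that underlies mean/variance outputs like SOFTCOST's: each subtask gets (optimistic, most likely, pessimistic) values, and means and variances are summed along a serial chain. The tasks and numbers are hypothetical, not SOFTCOST's internal algorithms.

      # PERT three-point estimate: mean = (o + 4m + p)/6, variance = ((p - o)/6)**2.
      tasks = [
          ("design", (4.0, 6.0, 10.0)),   # (optimistic, most likely, pessimistic) staff-weeks, illustrative
          ("code",   (8.0, 12.0, 20.0)),
          ("test",   (5.0, 8.0, 15.0)),
      ]

      total_mean = total_var = 0.0
      for name, (o, m, p) in tasks:
          total_mean += (o + 4 * m + p) / 6.0
          total_var += ((p - o) / 6.0) ** 2
      print(f"expected total = {total_mean:.1f} staff-weeks, std dev = {total_var ** 0.5:.1f} staff-weeks")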

  15. Lessons learned in deploying software estimation technology and tools

    NASA Technical Reports Server (NTRS)

    Panlilio-Yap, Nikki; Ho, Danny

    1994-01-01

    Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.

  16. Software Development Cost Estimation Executive Summary

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Menzies, Tim

    2006-01-01

    Identify simple fully validated cost models that provide estimation uncertainty with the cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required; and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task; and c) NASA SEPG activities.

  17. Designing Control System Application Software for Change

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard

    2001-01-01

    The Unified Modeling Language (UML) was used to design the Environmental Systems Test Stand (ESTS) control system software. The UML was chosen for its ability to facilitate a clear dialog between software designer and customer, from which requirements are discovered and documented in a manner which transposes directly to program objects. Applying the UML to control system software design has resulted in a baseline set of documents from which change and the effort of that change can be accurately measured. As the Environmental Systems Test Stand evolves, accurate estimates of the time and effort required to change the control system software will be made. Accurate quantification of the cost of software change can be made before implementation, improving schedule and budget accuracy.

  18. COSTMODL: An automated software development cost estimation tool

    NASA Technical Reports Server (NTRS)

    Roush, George B.

    1991-01-01

    The cost of developing computer software continues to consume an increasing portion of many organizations' total budgets, both in the public and private sector. As this trend develops, the capability to produce reliable estimates of the effort and schedule required to develop a candidate software product takes on increasing importance. The COSTMODL program was developed to provide an in-house capability to perform development cost estimates for NASA software projects. COSTMODL is an automated software development cost estimation tool which incorporates five cost estimation algorithms including the latest models for the Ada language and incrementally developed products. The principal characteristic which sets COSTMODL apart from other software cost estimation programs is its capacity to be completely customized to a particular environment. The estimation equations can be recalibrated to reflect the programmer productivity characteristics demonstrated by the user's organization, and the set of significant factors which affect software development costs can be customized to reflect any unique properties of the user's development environment. Careful use of a capability such as COSTMODL can significantly reduce the risk of cost overruns and failed projects.
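
    A hedged sketch of the kind of local recalibration described above: holding the exponent of a power-law effort model fixed, refit the multiplicative productivity constant from an organization's own completed projects (log-space mean of observed ratios). The exponent and historical data are illustrative, not COSTMODL's actual equations.

      import math

      b = 1.05                                                # nominal exponent, illustrative
      history = [(12.0, 38.0), (25.0, 70.0), (50.0, 160.0)]   # (KSLOC, person-months), illustrative

      log_ratios = [math.log(e / s ** b) for s, e in history]
      a_local = math.exp(sum(log_ratios) / len(log_ratios))   # geometric mean of effort / size**b
      print(f"recalibrated constant a = {a_local:.2f}")
      print("estimate for 30 KSLOC:", round(a_local * 30 ** b, 1), "person-months")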

  19. A Novel Rules Based Approach for Estimating Software Birthmark

    PubMed Central

    Binti Alias, Norma; Anwar, Sajid

    2015-01-01

    Software birthmark is a unique quality of software that can be used to detect software theft. Comparing the birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing software without proper permission, as specified in the license agreement. The estimation of a birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate the properties of a birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363

  20. Optimization of Skill Retention in the U.S. Army through Initial Training Analysis and Design. Volume 1.

    DTIC Science & Technology

    1983-05-01

    observed end-of-course scores for tasks trained to criterion. The MGA software was calibrated to provide retention estimates at two levels of...exceed the MGA estimates. Thirty-five out of forty, or 87.5%, of the tasks met this expectation. For these first trial data, MGA software predicts...Objective: The objective of this effort was to perform an operational test of the capability of MGA Skill Training and Retention (STAR©) software to

  1. The development and application of composite complexity models and a relative complexity metric in a software maintenance environment

    NASA Technical Reports Server (NTRS)

    Hops, J. M.; Sherif, J. S.

    1994-01-01

    A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of expected software maintenance cost, long before software is delivered to users or customers. It has been estimated that, on the average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.

  2. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric method?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
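
    The report does not name a specific model; purely as an example of the parametric, hardware-style reliability growth models it contrasts with non-parametric methods, the sketch below evaluates the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) with hypothetical parameter values.

      import math

      a = 120.0   # expected total number of faults (illustrative)
      b = 0.05    # per-week fault detection rate (illustrative)

      def expected_faults(t_weeks):
          return a * (1.0 - math.exp(-b * t_weeks))   # Goel-Okumoto mean value function

      t = 26.0
      found = expected_faults(t)
      print(f"after {t:.0f} weeks: {found:.1f} faults expected found, {a - found:.1f} expected remaining")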

  3. Enabling Software Acquisition Improvement: Government and Industry Software Development Team Acquisition Model

    DTIC Science & Technology

    2010-04-30

    estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining...previous and current complex SW development efforts, the program offices will have a source of objective lessons learned and metrics that can be applied...the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this

  4. Understanding and Predicting the Process of Software Maintenance Releases

    NASA Technical Reports Server (NTRS)

    Basili, Victor; Briand, Lionel; Condon, Steven; Kim, Yong-Mi; Melo, Walcelio L.; Valett, Jon D.

    1996-01-01

    One of the major concerns of any maintenance organization is to understand and estimate the cost of maintenance releases of software systems. Planning the next release so as to maximize the increase in functionality and the improvement in quality are vital to successful maintenance management. The objective of this paper is to present the results of a case study in which an incremental approach was used to better understand the effort distribution of releases and build a predictive effort model for software maintenance releases. This study was conducted in the Flight Dynamics Division (FDD) of NASA Goddard Space Flight Center(GSFC). This paper presents three main results: 1) a predictive effort model developed for the FDD's software maintenance release process; 2) measurement-based lessons learned about the maintenance process in the FDD; and 3) a set of lessons learned about the establishment of a measurement-based software maintenance improvement program. In addition, this study provides insights and guidelines for obtaining similar results in other maintenance organizations.

  5. A generic open-source software framework supporting scenario simulations in bioterrorist crises.

    PubMed

    Falenski, Alexander; Filter, Matthias; Thöns, Christian; Weiser, Armin A; Wigger, Jan-Frederik; Davis, Matthew; Douglas, Judith V; Edlund, Stefan; Hu, Kun; Kaufman, James H; Appel, Bernd; Käsbohrer, Annemarie

    2013-09-01

    Since the 2001 anthrax attack in the United States, awareness of threats originating from bioterrorism has grown. This led internationally to increased research efforts to improve knowledge of and approaches to protecting human and animal populations against the threat from such attacks. A collaborative effort in this context is the extension of the open-source Spatiotemporal Epidemiological Modeler (STEM) simulation and modeling software for agro- or bioterrorist crisis scenarios. STEM, originally designed to enable community-driven public health disease models and simulations, was extended with new features that enable integration of proprietary data as well as visualization of agent spread along supply and production chains. STEM now provides a fully developed open-source software infrastructure supporting critical modeling tasks such as ad hoc model generation, parameter estimation, simulation of scenario evolution, estimation of effects of mitigation or management measures, and documentation. This open-source software resource can be used free of charge. Additionally, STEM provides critical features like built-in worldwide data on administrative boundaries, transportation networks, or environmental conditions (eg, rainfall, temperature, elevation, vegetation). Users can easily combine their own confidential data with built-in public data to create customized models of desired resolution. STEM also supports collaborative and joint efforts in crisis situations by extended import and export functionalities. In this article we demonstrate specifically those new software features implemented to accomplish STEM application in agro- or bioterrorist crisis scenarios.

  6. COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL

    NASA Technical Reports Server (NTRS)

    Roush, G. B.

    1994-01-01

    The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Assimilating COSTMODL to any organization's particular environment can yield significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse sensitive with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo Professional 5.0 for recompilation. An executable is provided on the distribution diskettes. COSTMODL requires 512K RAM. The standard distribution medium for COSTMODL is three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. COSTMODL was developed in 1991. IBM PC is a registered trademark of International Business Machines. Borland and Turbo Pascal are registered trademarks of Borland International, Inc. Turbo Professional is a trademark of TurboPower Software. MS-DOS is a registered trademark of Microsoft Corporation.

  7. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.
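
    Two of the evaluation criteria commonly used when validating cost models of the kind discussed above are MMRE (mean magnitude of relative error) and PRED(25), the fraction of projects estimated within 25% of actual effort; the paper's own criteria are not listed in the abstract, and the data below are hypothetical.

      actual    = [100.0, 240.0, 60.0, 400.0]   # actual efforts, illustrative
      estimated = [120.0, 200.0, 75.0, 380.0]   # model estimates, illustrative

      mre = [abs(a - e) / a for a, e in zip(actual, estimated)]
      mmre = sum(mre) / len(mre)
      pred25 = sum(1 for r in mre if r <= 0.25) / len(mre)
      print(f"MMRE = {mmre:.2f}, PRED(25) = {pred25:.2f}")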

  8. Cost Estimation Techniques for C3I System Software.

    DTIC Science & Technology

    1984-07-01

    opment manmonth have been determined for maxi, midi, and mini type computers. Small to median size timeshared developments used 0.2 to 1.5 hours...development schedule 1.23 1.00 1.10 2.1.3 Detailed Model The final codification of the COCOMO regressions was the development of separate effort...regardless of the software structure level being estimated: DEVC -- the expected development computer (maxi, midi, mini, micro) MODE -- the expected

  9. Effort Drivers Estimation for Brazilian Geographically Distributed Software Development

    NASA Astrophysics Data System (ADS)

    Almeida, Ana Carina M.; Souza, Renata; Aquino, Gibeon; Meira, Silvio

    To meet the requirements of today's fast-paced markets, it is important to develop projects on time and with the minimum use of resources. A good estimate is the key to achieving this goal. Several companies have started to work with geographically distributed teams due to cost reduction and time-to-market. Some researchers indicate that this approach introduces new challenges, because the teams work in different time zones and have possible differences in culture and language. It is already known that multisite development increases the software cycle time. Data from 15 DSD projects from 10 distinct companies were collected. The analysis shows drivers that significantly impact the total effort planned to develop systems using the DSD approach in Brazil.

  10. The impact of organizational structure on flight software cost risk

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Lum, Karen; Monson, Erik

    2004-01-01

    This paper summarizes the final results of the follow-up study, which updates the estimated software effort growth for those projects that were still under development and includes an evaluation of the roles versus observed cost risk for the missions covered in the original study, expanding the data set to thirteen missions.

  11. Design and implementation of a Windows NT network to support CNC activities

    NASA Technical Reports Server (NTRS)

    Shearrow, C. A.

    1996-01-01

    The Manufacturing, Materials, & Processes Technology Division is undergoing dramatic changes to bring its manufacturing practices current with today's technological revolution. The Division is developing Computer Automated Design and Computer Automated Manufacturing (CAD/CAM) abilities. The development of resource tracking is underway in the form of an accounting software package called Infisy. These two efforts will bring the division into the 1980's in relationship to manufacturing processes. Computer Integrated Manufacturing (CIM) is the final phase of change to be implemented. This document is a qualitative study and application of a CIM application capable of finishing the changes necessary to bring the manufacturing practices into the 1990's. The documentation provided in this qualitative research effort includes discovery of the current status of manufacturing in the Manufacturing, Materials, & Processes Technology Division including the software, hardware, network and mode of operation. The proposed direction of research included a network design, computers to be used, software to be used, machine-to-computer connections, an estimated timeline for implementation, and a cost estimate. Recommendations for the division's improvement include actions to be taken, software to utilize, and computer configurations.

  12. A Strategy for Autogeneration of Space Shuttle Ground Processing Simulation Models for Project Makespan Estimations

    NASA Technical Reports Server (NTRS)

    Madden, Michael G.; Wyrick, Roberta; O'Neill, Dale E.

    2005-01-01

    Space Shuttle Processing is a complicated and highly variable project. The planning and scheduling problem, categorized as a Resource Constrained - Stochastic Project Scheduling Problem (RC-SPSP), has a great deal of variability in the Orbiter Processing Facility (OPF) process flow from one flight to the next. Simulation Modeling is a useful tool in estimation of the makespan of the overall process. However, simulation requires a model to be developed, which itself is a labor and time consuming effort. With such a dynamic process, often the model would potentially be out of synchronization with the actual process, limiting the applicability of the simulation answers in solving the actual estimation problem. Integration of TEAMS model enabling software with our existing schedule program software is the basis of our solution. This paper explains the approach used to develop an auto-generated simulation model from planning and schedule efforts and available data.

  13. Operations analysis (study 2.1): Shuttle upper stage software requirements

    NASA Technical Reports Server (NTRS)

    Wolfe, R. R.

    1974-01-01

    An investigation of software costs related to space shuttle upper stage operations with emphasis on the additional costs attributable to space servicing was conducted. The questions and problem areas include the following: (1) the key parameters involved with software costs; (2) historical data for extrapolation of future costs; (3) elements of the basic software development effort that are applicable to servicing functions; (4) effect of multiple servicing on complexity of the operation; and (5) whether recurring software costs are significant. The results address these questions and provide a foundation for estimating software costs based on the costs of similar programs and a series of empirical factors.

  14. Improving software quality - The use of formal inspections at the Jet Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Bush, Marilyn

    1990-01-01

    The introduction of software formal inspections (Fagan Inspections) at JPL for finding and fixing defects early in the software development life cycle is reviewed. It is estimated that, by the year 2000, some software efforts will rise to as much as 80 percent of the total. Software problems are especially important at NASA, as critical flight software must be error-free. It is shown that formal inspections are particularly effective at finding and removing defects having to do with clarity, correctness, consistency, and completeness. A very significant discovery was that code audits were not as effective at finding defects as code inspections.

  15. Software Cost Estimation Using a Decision Graph Process: A Knowledge Engineering Approach

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry; Spagnuolo, John, Jr.

    2011-01-01

    This paper is not a description per se of the efforts by two software cost analysts. Rather, it is an outline of the methodology used for FSW cost analysis presented in a form that would serve as a foundation upon which others may gain insight into how to perform FSW cost analyses for their own problems at hand.

  16. Cost and schedule estimation study report

    NASA Technical Reports Server (NTRS)

    Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon

    1993-01-01

    This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.
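
    A hedged sketch of an SEL-style effort model based on size adjusted for reuse, as described above: reused code is counted at a reduced weight (20% is a commonly cited SEL value) and effort follows a power law in the adjusted size. The productivity constants below are illustrative placeholders, not the updated FDD guideline values.

      def adjusted_ksloc(new_ksloc, reused_ksloc, reuse_weight=0.2):
          return new_ksloc + reuse_weight * reused_ksloc     # size adjusted for reuse

      def effort_staff_months(adj_ksloc, a=1.5, b=0.98):     # a, b illustrative, not SEL-calibrated
          return a * adj_ksloc ** b

      size = adjusted_ksloc(new_ksloc=40.0, reused_ksloc=60.0)
      print(f"adjusted size = {size:.1f} KSLOC, effort = {effort_staff_months(size):.1f} staff-months")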

  17. Software Engineering Improvement Plan

    NASA Technical Reports Server (NTRS)

    2006-01-01

    In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.

  18. A GIS-based framework for evaluating investments in fire management: Spatial allocation of recreation values

    Treesearch

    Kenneth A. Baerenklau; Armando González-Cabán; Catrina I. Páez; Edgard Chávez

    2009-01-01

    The U.S. Forest Service is responsible for developing tools to facilitate effective and efficient fire management on wildlands and urban-wildland interfaces. Existing GIS-based fire modeling software only permits estimation of the costs of fire prevention and mitigation efforts as well as the effects of those efforts on fire behavior. This research demonstrates how the...

  19. Object-oriented productivity metrics

    NASA Technical Reports Server (NTRS)

    Connell, John L.; Eller, Nancy

    1992-01-01

    Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.

  20. Multi-crop area estimation and mapping on a microprocessor/mainframe network

    NASA Technical Reports Server (NTRS)

    Sheffner, E.

    1985-01-01

    The data processing system is outlined for a 1985 test aimed at determining the performance characteristics of area estimation and mapping procedures connected with the California Cooperative Remote Sensing Project. The project is a joint effort of the USDA Statistical Reporting Service-Remote Sensing Branch, the California Department of Water Resources, NASA-Ames Research Center, and the University of California Remote Sensing Research Program. One objective of the program was to study performance when data processing is done on a microprocessor/mainframe network under operational conditions. The 1985 test covered the hardware, software, and network specifications and the integration of these three components. Plans for the year - including planned completion of PEDITOR software, testing of software on MIDAS, and accomplishment of data processing on the MIDAS-VAX-CRAY network - are discussed briefly.

  1. When is Testing Sufficient

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush

    1999-01-01

    The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of the software development, testing, must be decreased. Two questions are frequently asked: "To what extent is the software error-free?" and "How much time and effort is required to detect and remove the remaining errors?" Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Secondly, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what is sufficient testing and when is the most cost-effective time to stop testing.

  2. Evaluation of the fishery status for King Soldier Bream Argyrops spinifer in Pakistan using the software CEDA and ASPIC

    NASA Astrophysics Data System (ADS)

    Memon, Aamir Mahmood; Liu, Qun; Memon, Khadim Hussain; Baloch, Wazir Ali; Memon, Asfandyar; Baset, Abdul

    2015-07-01

    Catch and effort data were analyzed to estimate the maximum sustainable yield (MSY) of King Soldier Bream, Argyrops spinifer (Forsskål, 1775, Family: Sparidae), and to evaluate the present status of the fish stocks exploited in Pakistani waters. The catch and effort data for the 25-year period 1985-2009 were analyzed using two computer software packages, CEDA (catch and effort data analysis) and ASPIC (a surplus production model incorporating covariates). The maximum catch of 3 458 t was observed in 1988 and the minimum catch of 1 324 t in 2005, while the average annual catch of A. spinifer over the 25 years was 2 500 t. The surplus production models of Fox, Schaefer, and Pella Tomlinson under three error assumptions of normal, log-normal and gamma are in the CEDA package, and the two surplus models of Fox and logistic are in the ASPIC package. In CEDA, the MSY was estimated by applying an initial proportion (IP) of 0.8, because the starting catch was approximately 80% of the maximum catch. Because the gamma error assumption showed maximization failures, the MSY estimates from CEDA with the Fox surplus production model under the remaining two error assumptions were 1 692.08 t (R² = 0.572) and 1 694.09 t (R² = 0.606), respectively, and those from the Schaefer and Pella Tomlinson models under the two error assumptions were 2 390.95 t (R² = 0.563) and 2 380.06 t (R² = 0.605), respectively. The MSY estimated by the Fox model was conservative compared with the Schaefer and Pella Tomlinson models. The MSY values from the Schaefer and Pella Tomlinson models were the same. The MSY values computed using the ASPIC software with the two surplus production models of Fox and logistic were 1 498 t (R² = 0.917) and 2 488 t (R² = 0.897), respectively. The estimated values of MSY using CEDA were about 1 700-2 400 t and the values from ASPIC were 1 500-2 500 t. The estimates output by the CEDA and ASPIC packages indicate that the stock is overfished and needs effective management to reduce the fishing effort on the species in Pakistani waters.
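
    For orientation, a minimal sketch of the Schaefer (logistic) surplus production model used by packages such as CEDA and ASPIC: surplus production rB(1 - B/K) peaks at B = K/2, giving MSY = rK/4. The r and K values below are hypothetical and are not the parameters fitted in this study.

      r = 0.35      # intrinsic growth rate per year (illustrative)
      K = 28_000.0  # carrying capacity in tonnes (illustrative)

      def surplus_production(biomass):
          return r * biomass * (1.0 - biomass / K)   # Schaefer surplus production

      msy = r * K / 4.0
      print(f"MSY = {msy:.0f} t/yr, attained at B = {K / 2:.0f} t")
      print("surplus production at B = 10 000 t:", round(surplus_production(10_000.0), 1), "t/yr")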

  3. Department of Defense Software Factbook

    DTIC Science & Technology

    2017-07-07

    parameters, these rules of thumb may not provide a lot of value to project managers estimating their software efforts. To get the information useful to them...organization determine the total cost of a particular project, but it is a useful metric to technical managers when they are required to submit an annual...outcome. It is most likely a combination of engineering, management, and funding factors. Although a project may resist planning a schedule slip, this

  4. Testing Software Development Project Productivity Model

    NASA Astrophysics Data System (ADS)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in a quest for software development, with existing estimation models often underestimating software development efforts by as much as 500 to 600 percent. To address this issue, existing models usually are calibrated using local data with a small sample size, with resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines the key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control, and Simulation. This research validates findings from previous work concerning software project productivity and leverages those results in this study. The hypothesized project productivity model provides statistical support and validation of expert opinions used by practitioners in the field of software project estimation.

  5. Development and Application of New Quality Model for Software Projects

    PubMed Central

    Karnavel, K.; Dillibabu, R.

    2014-01-01

    The IT industry tries to employ a number of models to identify the defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase the quality without increasing the cost and time. The computation time, cost, and effort to predict the residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted. The application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows there is a significant improvement in the quality of the software projects. PMID:25478594

  6. Development and application of new quality model for software projects.

    PubMed

    Karnavel, K; Dillibabu, R

    2014-01-01

    The IT industry tries to employ a number of models to identify the defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase the quality without increasing the cost and time. The computation time, cost, and effort to predict the residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted. The application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows there is a significant improvement in the quality of the software projects.

  7. Building a Science Software Institute: Synthesizing the Lessons Learned from the ISEES and WSSI Software Institute Conceptualization Efforts

    NASA Astrophysics Data System (ADS)

    Idaszak, R.; Lenhardt, W. C.; Jones, M. B.; Ahalt, S.; Schildhauer, M.; Hampton, S. E.

    2014-12-01

    The NSF, in an effort to support the creation of sustainable science software, funded 16 science software institute conceptualization efforts. The goal of these conceptualization efforts is to explore approaches to creating the institutional, sociological, and physical infrastructures to support sustainable science software. This paper will present the lessons learned from two of these conceptualization efforts, the Institute for Sustainable Earth and Environmental Software (ISEES - http://isees.nceas.ucsb.edu) and the Water Science Software Institute (WSSI - http://waters2i2.org). ISEES is a multi-partner effort led by National Center for Ecological Analysis and Synthesis (NCEAS). WSSI, also a multi-partner effort, is led by the Renaissance Computing Institute (RENCI). The two conceptualization efforts have been collaborating due to the complementarity of their approaches and given the potential synergies of their science focus. ISEES and WSSI have engaged in a number of activities to address the challenges of science software such as workshops, hackathons, and coding efforts. More recently, the two institutes have also collaborated on joint activities including training, proposals, and papers. In addition to presenting lessons learned, this paper will synthesize across the two efforts to project a unified vision for a science software institute.

  8. A survey of program slicing for software engineering

    NASA Technical Reports Server (NTRS)

    Beck, Jon

    1993-01-01

    This research concerns program slicing, which is used as a tool for program maintenance of software systems. Program slicing decreases the level of effort required to understand and maintain complex software systems. It was first designed as a debugging aid, but it has since been generalized into various tools and extended to include program comprehension, module cohesion estimation, requirements verification, dead code elimination, and maintenance of several software systems, including reverse engineering, parallelization, portability, and reuse component generation. This paper seeks to address and define terminology, theoretical concepts, program representation, different program graphs, developments in static slicing, dynamic slicing, and semantics and mathematical models. Applications for conventional slicing are presented, along with a prognosis of future work in this field.

  9. Technology Estimating 2: A Process to Determine the Cost and Schedule of Space Technology Research and Development

    NASA Technical Reports Server (NTRS)

    Cole, Stuart K.; Wallace, Jon; Schaffer, Mark; May, M. Scott; Greenberg, Marc W.

    2014-01-01

    As a leader in space technology research and development, NASA is continuing in the development of the Technology Estimating process, initiated in 2012, for estimating the cost and schedule of low maturity technology research and development, where the Technology Readiness Level is less than TRL 6. NASA's Technology Roadmap comprises 14 technology areas. The focus of this continuing Technology Estimating effort included four Technology Areas (TA): TA3 Space Power and Energy Storage, TA4 Robotics, TA8 Instruments, and TA12 Materials, to confine the research to the most abundant data pool. This research report continues the development of technology estimating efforts completed during 2013-2014, and addresses the refinement of parameters selected and recommended for use in the estimating process, where the parameters developed are applicable to Cost Estimating Relationships (CERs) used in the parametric cost estimating analysis. This research addresses the architecture for administration of the Technology Cost and Scheduling Estimating tool, the parameters suggested for computer software adjunct to any technology area, and the identification of gaps in the Technology Estimating process.

  10. The software product assurance metrics study: JPL's software systems quality and productivity

    NASA Technical Reports Server (NTRS)

    Bush, Marilyn W.

    1989-01-01

    The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.

  11. The MERMAID project

    NASA Technical Reports Server (NTRS)

    Cowderoy, A. J. C.; Jenkins, John O.; Poulymenakou, A

    1992-01-01

    The tendency for software development projects to be completed over schedule and over budget has been documented extensively. Additionally, many projects are completed within budgetary and schedule targets only as a result of the customer agreeing to accept reduced functionality. In his classic book, The Mythical Man Month, Fred Brooks exposes the fallacy that effort and schedule are freely interchangeable. All current cost models are produced on the assumption that there is very limited scope for schedule compression unless there is a corresponding reduction in delivered functionality. The Metrication and Resources Modeling Aid (MERMAID) project, partially financed by the Commission of the European Communities (CEC) as Project 2046, began in October 1988, and its goals were as follows: (1) to improve understanding of the relationships between software development productivity and product and process metrics; (2) to facilitate widespread technology transfer from the Consortium to the European Software Industry; and (3) to facilitate widespread uptake of cost estimation techniques by providing prototype cost estimation tools. MERMAID developed a family of methods for cost estimation, many of which have been implemented in prototype tools. These prototypes are best considered toolkits or workbenches.

  12. Program SimAssem: software for simulating species assemblages and estimating species richness

    Treesearch

    Gordon C. Reese; Kenneth R. Wilson; Curtis H. Flather

    2013-01-01

    1. Species richness, the number of species in a defined area, is the most frequently used biodiversity measure. Despite its intuitive appeal and conceptual simplicity, species richness is often difficult to quantify, even in well surveyed areas, because of sampling limitations such as survey effort and species detection probability....

  13. MNE software for processing MEG and EEG data

    PubMed Central

    Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808

  14. Model-based software for simulating ultrasonic pulse/echo inspections of metal components

    NASA Astrophysics Data System (ADS)

    Chiou, Chien-Ping; Margetan, Frank J.; Taylor, Jared L.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.; Barnard, Daniel J.

    2017-02-01

    Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at Iowa State University, an effort was initiated in 2015 to repackage existing research-grade software into user-friendly tools for the rapid estimation of signal-to-noise ratio (S/N) for ultrasonic inspections of metals. The software combines: (1) a Python-based graphical user interface for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signals and backscattered grain noise characteristics. The latter makes use of the Thompson-Gray Model for the response from an internal defect and the Independent Scatterer Model for backscattered grain noise. This paper provides an overview of the ongoing modeling effort with emphasis on recent developments. These include: treatment of angle-beam inspections, implementation of distance-amplitude corrections, changes in the generation of "invented" calibration signals, efforts to simulate ultrasonic C-scans, and experimental testing of model predictions. The simulation software can now treat both normal and oblique-incidence immersion inspections of curved metal components having equiaxed microstructures in which the grain size varies with depth. Both longitudinal and shear-wave inspections are treated. The model transducer can be planar, spherically-focused, or bi-cylindrically-focused. A calibration (or reference) signal is required and is used to deduce the measurement system efficiency function. This can be "invented" by the software using center frequency and bandwidth information specified by the user, or, alternatively, a measured calibration signal can be used. Defect types include flat-bottomed-hole reference reflectors, and spherical pores and inclusions. Simulation outputs include estimated defect signal amplitudes, root-mean-squared grain noise amplitudes, and S/N as functions of the depth of the defect within the metal component. At any particular depth, the user can view a simulated A-scan displaying the superimposed defect and grain-noise waveforms. The realistic grain noise signals used in the A-scans are generated from a set of measured "universal" noise signals whose strengths and spectral characteristics are altered to match predicted noise characteristics for the simulation at hand. We present simulation examples demonstrating recent developments, and discuss plans to improve simulator capabilities.

  15. Log sampling methods and software for stand and landscape analyses.

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  16. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  17. Software Measurement Guidebook. Version 02.00.02

    DTIC Science & Technology

    1992-12-01

    Excerpts (figure list and text fragments): Compatibility Testing Process ... 9-5; Figure 9-3. Development Effort Planning Curve ... 9-7; Figure 10-1 ... requirements, design, code, and test, and for analyzing this data. Proposal Manager: the person responsible for describing and supporting the estimated ... designed, build/release ranges, variances, and comparisons; size growth; costs; completions; and content, units completing test, units with historical ...

  18. Source Lines Counter (SLiC) Version 4.0

    NASA Technical Reports Server (NTRS)

    Monson, Erik W.; Smith, Kevin A.; Newport, Brian J.; Gostelow, Roli D.; Hihn, Jairus M.; Kandt, Ronald K.

    2011-01-01

    Source Lines Counter (SLiC) is a software utility designed to measure software source code size using logical source statements and other common measures for 22 of the programming languages commonly used at NASA and in the aerospace industry. Such metrics can be used in a wide variety of applications, from parametric cost estimation to software defect analysis. SLiC has a variety of unique features such as automatic code search, automatic file detection, hierarchical directory totals, and spreadsheet-compatible output. SLiC was written for extensibility; new programming language support can be added with minimal effort in a short amount of time. SLiC runs on a variety of platforms including UNIX, Windows, and Mac OS X. Its straightforward command-line interface allows for customization and incorporation into the software build process for tracking development metrics.
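
    To illustrate the kind of counting a logical-source-statement tool performs, the sketch below applies simplified heuristics to C-like code: strip comments, then count statement terminators and block openings. The heuristics are assumptions for illustration only, not SLiC's actual counting rules.

      import re

      def count_logical_lines_c(source: str) -> int:
          """Rough logical-statement count for C-like code (illustrative only)."""
          # Remove block comments, then line comments.
          source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)
          source = re.sub(r"//[^\n]*", "", source)
          # Count statement terminators and block openings as logical statements.
          return source.count(";") + source.count("{")

      example = """
      /* add two numbers */
      int add(int a, int b) {
          int s = a + b;  // sum
          return s;
      }
      """
      print(count_logical_lines_c(example))  # 3: function block, declaration, return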

  19. Estimating Velocities of Glaciers Using Sentinel-1 SAR Imagery

    NASA Astrophysics Data System (ADS)

    Gens, R.; Arnoult, K., Jr.; Friedl, P.; Vijay, S.; Braun, M.; Meyer, F. J.; Gracheva, V.; Hogenson, K.

    2017-12-01

    In an international collaborative effort, software has been developed to estimate the velocities of glaciers by using Sentinel-1 Synthetic Aperture Radar (SAR) imagery. The technique, initially designed by the University of Erlangen-Nuremberg (FAU), has been previously used to quantify spatial and temporal variabilities in the velocities of surging glaciers in the Pakistan Karakoram. The software estimates surface velocities by first co-registering image pairs to sub-pixel precision and then by estimating local offsets based on cross-correlation. The Alaska Satellite Facility (ASF) at the University of Alaska Fairbanks (UAF) has modified the software to make it more robust and also capable of migration into the Amazon Cloud. Additionally, ASF has implemented a prototype that offers the glacier tracking processing flow as a subscription service as part of its Hybrid Pluggable Processing Pipeline (HyP3). Since the software is co-located with ASF's cloud-based Sentinel-1 archive, processing of large data volumes is now more efficient and cost effective. Velocity maps are estimated from Single Look Complex (SLC) SAR image pairs together with a digital elevation model (DEM) of the local topography. A time series of these velocity maps then allows the long-term monitoring of these glaciers. Due to the all-weather capabilities and the dense coverage of Sentinel-1 data, the results are complementary to optically generated ones. Together with the products from the Global Land Ice Velocity Extraction project (GoLIVE) derived from Landsat 8 data, glacier speeds can be monitored more comprehensively. Examples from Sentinel-1 SAR-derived results are presented along with optical results for the same glaciers.
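
    The core of such offset tracking can be sketched as follows: for a reference patch from the first co-registered image, search a window of shifts in the second image for the one that maximizes normalized cross-correlation. Patch size, search range, and the conversion from pixel offsets to velocity are placeholder assumptions, not the FAU/ASF implementation.

      import numpy as np

      def best_offset(ref_patch, search_img, row0, col0, max_shift=8):
          """Find the pixel shift that maximizes normalized cross-correlation."""
          h, w = ref_patch.shape
          ref = (ref_patch - ref_patch.mean()) / (ref_patch.std() + 1e-12)
          best, best_shift = -np.inf, (0, 0)
          for dr in range(-max_shift, max_shift + 1):
              for dc in range(-max_shift, max_shift + 1):
                  r, c = row0 + dr, col0 + dc
                  if r < 0 or c < 0:
                      continue
                  cand = search_img[r:r + h, c:c + w]
                  if cand.shape != ref_patch.shape:
                      continue
                  cand_n = (cand - cand.mean()) / (cand.std() + 1e-12)
                  score = np.mean(ref * cand_n)       # normalized cross-correlation
                  if score > best:
                      best, best_shift = score, (dr, dc)
          return best_shift  # (row, col) offset in pixels

      # Dividing the pixel offset by the acquisition time interval (and scaling by the
      # pixel spacing) gives a surface velocity estimate for that patch.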

  20. Techniques and software tools for estimating ultrasonic signal-to-noise ratios

    NASA Astrophysics Data System (ADS)

    Chiou, Chien-Ping; Margetan, Frank J.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.

    2016-02-01

    At Iowa State University's Center for Nondestructive Evaluation (ISU CNDE), the use of models to simulate ultrasonic inspections has played a key role in R&D efforts for over 30 years. To this end a series of wave propagation models, flaw response models, and microstructural backscatter models have been developed to address inspection problems of interest. One use of the combined models is the estimation of signal-to-noise ratios (S/N) in circumstances where backscatter from the microstructure (grain noise) acts to mask sonic echoes from internal defects. Such S/N models have been used in the past to address questions of inspection optimization and reliability. Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was recently initiated to improve existing research-grade software by adding a graphical user interface (GUI) to turn it into a user-friendly tool for the rapid estimation of S/N for ultrasonic inspections of metals. The software combines: (1) a Python-based GUI for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signal and backscattered grain noise characteristics. The latter makes use of several models including: the Multi-Gaussian Beam Model for computing sonic fields radiated by commercial transducers; the Thompson-Gray Model for the response from an internal defect; the Independent Scatterer Model for backscattered grain noise; and the Stanke-Kino Unified Model for attenuation. The initial emphasis was on reformulating the research-grade code into a suitable modular form, adding the graphical user interface and performing computations rapidly and robustly. Thus the initial inspection problem being addressed is relatively simple. A normal-incidence pulse/echo immersion inspection is simulated for a curved metal component having a non-uniform microstructure, specifically an equiaxed, untextured microstructure in which the average grain size may vary with depth. The defect may be a flat-bottomed-hole reference reflector, a spherical void or a spherical inclusion. In future generations of the software, microstructures and defect types will be generalized and oblique incidence inspections will be treated as well. This paper provides an overview of the modeling approach and presents illustrative results output by the first-generation software.
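
    At its simplest, the S/N figure of merit described above is the ratio of the model-predicted defect signal amplitude to the root-mean-square grain noise amplitude at each defect depth, often quoted in decibels. The sketch below computes that ratio from invented amplitude curves; the numbers are placeholders, not output of the ISU models.

      import numpy as np

      depths_mm = np.linspace(5.0, 50.0, 10)           # hypothetical defect depths
      defect_amp = 1.0 * np.exp(-0.03 * depths_mm)     # invented defect signal amplitudes
      rms_noise = 0.05 + 0.002 * depths_mm             # invented RMS grain-noise amplitudes

      snr = defect_amp / rms_noise                     # signal-to-noise ratio vs. depth
      snr_db = 20.0 * np.log10(snr)                    # same ratio expressed in dB

      for z, s in zip(depths_mm, snr_db):
          print(f"depth {z:5.1f} mm   S/N {s:5.1f} dB")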

  1. Analyzing and Predicting Effort Associated with Finding and Fixing Software Faults

    NASA Technical Reports Server (NTRS)

    Hamill, Maggie; Goseva-Popstojanova, Katerina

    2016-01-01

    Context: Software developers spend a significant amount of time fixing faults. However, not many papers have addressed the actual effort needed to fix software faults. Objective: The objective of this paper is twofold: (1) analysis of the effort needed to fix software faults and how it was affected by several factors and (2) prediction of the level of fix implementation effort based on the information provided in software change requests. Method: The work is based on data related to 1200 failures, extracted from the change tracking system of a large NASA mission. The analysis includes descriptive and inferential statistics. Predictions are made using three supervised machine learning algorithms and three sampling techniques aimed at addressing the imbalanced data problem. Results: Our results show that (1) 83% of the total fix implementation effort was associated with only 20% of failures. (2) Both safety critical failures and post-release failures required three times more effort to fix compared to non-critical and pre-release counterparts, respectively. (3) Failures with fixes spread across multiple components or across multiple types of software artifacts required more effort. The spread across artifacts was more costly than spread across components. (4) Surprisingly, some types of faults associated with later life-cycle activities did not require significant effort. (5) The level of fix implementation effort was predicted with 73% overall accuracy using the original, imbalanced data. Using oversampling techniques improved the overall accuracy up to 77%. More importantly, oversampling significantly improved the prediction of the high level effort, from 31% to around 85%. Conclusions: This paper shows the importance of tying software failures to changes made to fix all associated faults, in one or more software components and/or in one or more software artifacts, and the benefit of studying how the spread of faults and other factors affect the fix implementation effort.
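
    The prediction setup lends itself to a compact illustration: a supervised classifier trained on change-request attributes, with random oversampling of the minority (high-effort) class to counter the imbalance noted above. The features, labels, and classifier below are invented stand-ins; the study itself compared three learning algorithms and three sampling techniques.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Invented features per change request: [components touched, artifact types
      # touched, criticality level 1-5]; label 1 = high fix effort (minority class).
      X = rng.integers(1, 6, size=(300, 3)).astype(float)
      y = ((X[:, 0] > 3) & (X[:, 2] > 3) & (rng.random(300) < 0.7)).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # Random oversampling: repeat minority-class rows until classes are balanced.
      minority = np.flatnonzero(y_tr == 1)
      extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size, replace=True)
      X_bal = np.vstack([X_tr, X_tr[extra]])
      y_bal = np.concatenate([y_tr, y_tr[extra]])

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_bal, y_bal)
      print("overall accuracy:", clf.score(X_te, y_te))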

  2. AdaNET research plan

    NASA Technical Reports Server (NTRS)

    Mcbride, John G.

    1990-01-01

    The mission of the AdaNET research effort is to determine how to increase the availability of reusable Ada components and associated software engineering technology to both private and Federal sectors. The effort is structured to define the requirements for transfer of Federally developed software technology, study feasible approaches to meeting the requirements, and to gain experience in applying various technologies and practices. The overall approach to the development of the AdaNET System Specification is presented. A work breakdown structure is presented with each research activity described in detail. The deliverables for each work area are summarized. The overall organization and responsibilities for each research area are described. The schedule and necessary resources are presented for each research activity. The estimated cost is summarized for each activity. The project plan is fully described in the Super Project Expert data file contained on the floppy disk attached to the back cover of this plan.

  3. Software Certification and Software Certificate Management Systems

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2005-01-01

    Incremental certification and re-certification of code as it is developed and modified is a prerequisite for applying modern, evolutionary development processes, which are especially relevant for NASA. For example, the Columbia Accident Investigation Board (CAIB) report 121 concluded there is "the need for improved and uniform statistical sampling, audit, and certification processes". Also, re-certification time has been a limiting factor in making changes to Space Shuttle code close to launch time. This is likely to be an even bigger problem with the rapid turnaround required in developing NASA's replacement for the Space Shuttle, the Crew Exploration Vehicle (CEV). Hence, intelligent development processes are needed which place certification at the center of development. If certification tools provide useful information, such as estimated time and effort, they are more likely to be adopted. The ultimate impact of such a tool will be reduced effort and increased reliability.

  4. Data Service Provider Cost Estimation Tool

    NASA Technical Reports Server (NTRS)

    Fontaine, Kathy; Hunolt, Greg; Booth, Arthur L.; Banks, Mel

    2011-01-01

    The Data Service Provider Cost Estimation Tool (CET) and Comparables Database (CDB) package provides NASA's Earth Science Enterprise (ESE) the ability to generate the full range of year-by-year lifecycle cost estimates for the implementation and operation of data service providers required by ESE to support its science and applications programs. The CET can make estimates dealing with staffing costs, supplies, facility costs, network services, hardware and maintenance, commercial off-the-shelf (COTS) software licenses, software development and sustaining engineering, and the changes in costs that result from changes in workload. Data Service Providers may be stand-alone or embedded in flight projects, field campaigns, research or applications projects, or other activities. The CET and CDB package employs a cost-estimation-by-analogy approach. It is based on a new, general data service provider reference model that provides a framework for construction of a database by describing existing data service providers that are analogs (or comparables) to planned, new ESE data service providers. The CET implements the staff effort and cost estimation algorithms that access the CDB and generates the lifecycle cost estimate for a new data services provider. These data create a common basis for ESE proposal evaluators to consider projected data service provider costs.
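
    As a sketch of cost-estimation-by-analogy in the spirit of the CET/CDB approach, the snippet below looks up the most similar past data service providers in a tiny comparables table and averages their costs. The attributes, normalization, and records are invented for illustration.

      # Minimal analogy-based estimate: nearest comparables by normalized distance.
      comparables = [
          # (archive volume TB, users served, annual cost $M)  -- invented records
          (50.0, 200, 1.8),
          (120.0, 900, 4.2),
          (15.0, 80, 0.9),
          (300.0, 2500, 9.5),
      ]

      def estimate_cost(volume_tb, users, k=2):
          def distance(rec):
              dv = (rec[0] - volume_tb) / 300.0     # crude normalization by max value
              du = (rec[1] - users) / 2500.0
              return (dv * dv + du * du) ** 0.5
          nearest = sorted(comparables, key=distance)[:k]
          return sum(r[2] for r in nearest) / k     # average cost of k closest analogs

      print(estimate_cost(80.0, 500))               # e.g. 3.0 ($M per year)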

  5. Using Pilots to Assess the Value and Approach of CMMI Implementation

    NASA Technical Reports Server (NTRS)

    Godfrey, Sara; Andary, James; Rosenberg, Linda

    2002-01-01

    At Goddard Space Flight Center (GSFC), we have chosen to use Capability Maturity Model Integrated (CMMI) to guide our process improvement program. Projects at GSFC consist of complex systems of software and hardware that control satellites, operate ground systems, run instruments, manage databases and data and support scientific research. It is a challenge to launch a process improvement program that encompasses our diverse systems, yet is manageable in terms of cost effectiveness. In order to establish the best approach for improvement, our process improvement effort was divided into three phases: 1) Pilot projects; 2) Staged implementation; and 3) Sustainment and continual improvement. During Phase 1 the focus of the activities was on a baselining process, using pre-appraisals in order to get a baseline for making a better cost and effort estimate for the improvement effort. Pilot pre-appraisals were conducted from different perspectives so different approaches for process implementation could be evaluated. Phase 1 also concentrated on establishing an improvement infrastructure and training of the improvement teams. At the time of this paper, three pilot appraisals have been completed. Our initial appraisal was performed in a flight software area, considering the flight software organization as the organization. The second appraisal was done from a project perspective, focusing on systems engineering and acquisition, and using the organization as GSFC. The final appraisal was in a ground support software area, again using GSFC as the organization. This paper will present our initial approach, lessons learned from all three pilots and the changes in our approach based on the lessons learned.

  6. Control System Architectures, Technologies and Concepts for Near Term and Future Human Exploration of Space

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard; Overland, David

    2004-01-01

    Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware-to-software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.

  7. Remote Sensing and Capacity Building to Improve Food Security

    NASA Astrophysics Data System (ADS)

    Husak, G. J.; Funk, C. C.; Verdin, J. P.; Rowland, J.; Budde, M. E.

    2012-12-01

    The Famine Early Warning Systems Network (FEWS NET) is a U.S. Agency for International Development (USAID) supported project designed to monitor and anticipate food insecurity in the developing world, primarily Africa, Central America, the Caribbean and Central Asia. This is done through a network of partners involving U.S. government agencies, universities, country representatives, and partner institutions. This presentation will focus on the remotely sensed data used in FEWS NET activities and capacity building efforts designed to expand and enhance the use of FEWS NET tools and techniques. Remotely sensed data are of particular value in the developing world, where ground data networks and data reporting are limited. FEWS NET uses satellite based rainfall and vegetation greenness measures to monitor and assess food production conditions. Satellite rainfall estimates also drive crop models which are used in determining yield potential. Recent FEWS NET products also include estimates of actual evapotranspiration. Efforts are currently underway to assimilate these products into a single tool which would indicate areas experiencing abnormal conditions with implications for food production. FEWS NET is also involved in a number of capacity building activities. Two primary examples are the development of software and training of institutional partners in basic GIS and remote sensing. Software designed to incorporate rainfall station data with existing satellite-derived rainfall estimates gives users the ability to enhance satellite rainfall estimates or long-term means, resulting in gridded fields of rainfall that better reflect ground conditions. Further, this software includes a crop water balance model driven by the improved rainfall estimates. Finally, crop parameters, such as the planting date or length of growing period, can be adjusted by users to tailor the crop model to actual conditions. Training workshops in the use of this software, as well as basic GIS and remote sensing tools, are routinely conducted by FEWS NET representatives at host country meteorological and agricultural services. These institutions are then able to produce information that can more accurately inform food security decision making. Informed decision making reduces the risk associated with a given hazard. In the case of FEWS NET, this involves identification of shocks to food availability, allowing for the pre-positioning of aid to be available when a hazard strikes. Developing tools to incorporate better information in food production estimates and working closely with local staff trained in state-of-the-practice techniques results in a more informed decision making process, reducing the impacts of food security hazards.
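
    A schematic of the station-with-satellite blending described above: adjust a gridded satellite rainfall estimate toward gauge observations using inverse-distance-weighted corrections. The grid, station locations, and weighting scheme are invented placeholders, not the FEWS NET software.

      import numpy as np

      def blend(sat_grid, station_rc, station_mm, power=2.0):
          """Nudge a satellite rainfall grid toward station readings (illustrative)."""
          rows, cols = np.indices(sat_grid.shape)
          num = np.zeros_like(sat_grid, dtype=float)
          den = np.zeros_like(sat_grid, dtype=float)
          for (r, c), value in zip(station_rc, station_mm):
              diff = value - sat_grid[r, c]             # station-minus-satellite difference
              dist = np.hypot(rows - r, cols - c) + 1e-6
              w = 1.0 / dist ** power                   # inverse-distance weights
              num += w * diff
              den += w
          return sat_grid + num / den

      sat = np.full((5, 5), 10.0)          # satellite estimate: 10 mm everywhere
      stations = [(1, 1), (3, 4)]          # station grid locations
      observed = [14.0, 8.0]               # station gauge readings (mm)
      print(blend(sat, stations, observed).round(1))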

  8. Assessment of ICount software, a precise and fast egg counting tool for the mosquito vector Aedes aegypti.

    PubMed

    Gaburro, Julie; Duchemin, Jean-Bernard; Paradkar, Prasad N; Nahavandi, Saeid; Bhatti, Asim

    2016-11-18

    Widespread in the tropics, the mosquito Aedes aegypti is an important vector of many viruses, posing a significant threat to human health. Vector monitoring often requires fecundity estimation by counting eggs laid by female mosquitoes. Traditionally, manual data analyses have been used, but this requires considerable effort and the methods are prone to errors. An easy tool to assess the number of eggs laid would facilitate experimentation and vector control operations. This study introduces a software tool called ICount allowing automatic egg counting of the mosquito vector, Aedes aegypti. ICount egg estimation is statistically equivalent to manual counting, making the software effective for automatic and semi-automatic data analysis. This technique also allows rapid analysis compared to manual methods. Finally, the software has been used to assess p-cresol oviposition choices under laboratory conditions in order to test the system with different egg densities. ICount is a powerful tool for fast and precise egg count analysis, freeing experimenters from manual data processing. Software access is free and its user-friendly interface allows easy use by non-experts. Its efficiency has been tested in our laboratory with oviposition dual choices of Aedes aegypti females. The next step will be the development of a mobile application, based on the ICount platform, for vector monitoring surveys in the field.
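
    As a toy illustration of automated egg counting (not ICount's actual algorithm), the sketch below thresholds a grayscale image and counts connected dark blobs, assuming eggs appear darker than the background.

      import numpy as np
      from scipy import ndimage

      def count_eggs(image, threshold=0.5, min_pixels=4):
          """Count dark egg-like blobs by thresholding and connected-component labeling."""
          binary = image < threshold                      # eggs assumed darker than background
          labels, n = ndimage.label(binary)               # connected-component labeling
          sizes = ndimage.sum(binary, labels, range(1, n + 1))
          return int(np.sum(sizes >= min_pixels))         # ignore tiny specks

      img = np.ones((20, 20))
      img[2:5, 2:5] = 0.1                                 # synthetic "egg" 1
      img[10:14, 12:15] = 0.2                             # synthetic "egg" 2
      print(count_eggs(img))                              # -> 2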

  9. Software forecasting as it is really done: A study of JPL software engineers

    NASA Technical Reports Server (NTRS)

    Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.

    1993-01-01

    This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed clustering of activities that is very suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple-decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include: the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.

  10. CubeSat mission design software tool for risk estimating relationships

    NASA Astrophysics Data System (ADS)

    Gamble, Katharine Brumbaugh; Lightsey, E. Glenn

    2014-09-01

    In an effort to make the CubeSat risk estimation and management process more scientific, a software tool has been created that enables mission designers to estimate mission risks. CubeSat mission designers are able to input mission characteristics, such as form factor, mass, development cycle, and launch information, in order to determine the mission risk root causes which historically present the highest risk for their mission. Historical data was collected from the CubeSat community and analyzed to provide a statistical background to characterize these Risk Estimating Relationships (RERs). This paper develops and validates the mathematical model based on the same cost estimating relationship methodology used by the Unmanned Spacecraft Cost Model (USCM) and the Small Satellite Cost Model (SSCM). The RER development uses general error regression models to determine the best fit relationship between root cause consequence and likelihood values and the input factors of interest. These root causes are combined into seven overall CubeSat mission risks which are then graphed on the industry-standard 5×5 Likelihood-Consequence (L-C) chart to help mission designers quickly identify areas of concern within their mission. This paper is the first to document not only the creation of a historical database of CubeSat mission risks, but, more importantly, the scientific representation of Risk Estimating Relationships.
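
    As a schematic of fitting a risk estimating relationship, the sketch below regresses an invented likelihood score against two mission characteristics using ordinary least squares; the paper uses general error regression models, and the data and functional form here are placeholders.

      import numpy as np

      # Invented CubeSat records: [mass kg, development months], likelihood score (1-5).
      X = np.array([[1.0, 9.0], [3.0, 14.0], [2.0, 12.0], [4.0, 20.0], [1.5, 10.0]])
      likelihood = np.array([4.1, 3.0, 3.4, 2.2, 3.9])

      # Ordinary least squares fit: likelihood ~ b0 + b1*mass + b2*dev_months.
      A = np.column_stack([np.ones(len(X)), X])
      coeffs, *_ = np.linalg.lstsq(A, likelihood, rcond=None)
      print("fitted coefficients:", coeffs)

      # Predict the likelihood score for a new 2.5 kg, 15-month mission.
      new = np.array([1.0, 2.5, 15.0])
      print("predicted likelihood:", new @ coeffs)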

  11. Preliminary development of digital signal processing in microwave radiometers

    NASA Technical Reports Server (NTRS)

    Stanley, W. D.

    1980-01-01

    Topics covered involve a number of closely related tasks including: the development of several control loop and dynamic noise model computer programs for simulating microwave radiometer measurements; computer modeling of an existing stepped frequency radiometer in an effort to determine its optimum operational characteristics; investigation of the classical second order analog control loop to determine its ability to reduce the estimation error in a microwave radiometer; investigation of several digital signal processing unit designs; initiation of efforts to develop required hardware and software for implementation of the digital signal processing unit; and investigation of the general characteristics and peculiarities of digital processing noiselike microwave radiometer signals.

  12. Toward Dietary Assessment via Mobile Phone Video Cameras.

    PubMed

    Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce

    2010-11-13

    Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases.

  13. Integrated Software for Analyzing Designs of Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Philips, Alan D.

    2003-01-01

    Launch Vehicle Analysis Tool (LVA) is a computer program for preliminary design structural analysis of launch vehicles. Before LVA was developed, in order to analyze the structure of a launch vehicle, it was necessary to estimate its weight, feed this estimate into a program to obtain pre-launch and flight loads, then feed these loads into structural and thermal analysis programs to obtain a second weight estimate. If the first and second weight estimates differed, it was necessary to reiterate these analyses until the solution converged. This process generally took six to twelve person-months of effort. LVA incorporates a text-to-structural-layout converter, configuration drawing, mass properties generation, pre-launch and flight loads analysis, loads output plotting, direct solution structural analysis, and thermal analysis subprograms. These subprograms are integrated in LVA so that solutions can be iterated automatically. LVA incorporates expert-system software that makes fundamental design decisions without intervention by the user. It also includes unique algorithms based on extensive research. The total integration of analysis modules drastically reduces the need for interaction with the user. A typical solution can be obtained in 30 to 60 minutes. Subsequent runs can be done in less than two minutes.

  14. Generalized Support Software: Domain Analysis and Implementation

    NASA Technical Reports Server (NTRS)

    Stark, Mike; Seidewitz, Ed

    1995-01-01

    For the past five years, the Flight Dynamics Division (FDD) at NASA's Goddard Space Flight Center has been carrying out a detailed domain analysis effort and is now beginning to implement Generalized Support Software (GSS) based on this analysis. GSS is part of the larger Flight Dynamics Distributed System (FDDS), and is designed to run under the FDDS User Interface / Executive (UIX). The FDD is transitioning from a mainframe-based environment to systems running on engineering workstations. The GSS will be a library of highly reusable components that may be configured within the standard FDDS architecture to quickly produce low-cost satellite ground support systems. The estimate for the first release is that this library will contain approximately 200,000 lines of code. The main driver for developing generalized software is development cost and schedule improvement. The goal is ultimately to have at least 80 percent of all software required for a spacecraft mission (within the domain supported by the GSS) configured from the generalized components.

  15. Designing the modern pump: engineering aspects of continuous subcutaneous insulin infusion software.

    PubMed

    Welsh, John B; Vargas, Steven; Williams, Gary; Moberg, Sheldon

    2010-06-01

    Insulin delivery systems attracted the efforts of biological, mechanical, electrical, and software engineers well before they were commercially viable. The introduction of the first commercial insulin pump in 1983 represents an enduring milestone in the history of diabetes management. Since then, pumps have become much more than motorized syringes and have assumed a central role in diabetes management by housing data on insulin delivery and glucose readings, assisting in bolus estimation, and interfacing smoothly with humans and compatible devices. Ensuring the integrity of the embedded software that controls these devices is critical to patient safety and regulatory compliance. As pumps and related devices evolve, software engineers will face challenges and opportunities in designing pumps that are safe, reliable, and feature-rich. The pumps and related systems must also satisfy end users, healthcare providers, and regulatory authorities. In particular, pumps that are combined with glucose sensors and appropriate algorithms will provide the basis for increasingly safe and precise automated insulin delivery-essential steps to developing a fully closed-loop system.

  16. The Flight Optimization System Weights Estimation Method

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.

    2017-01-01

    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube-with-wing aircraft, and a substantial amount of effort went into its development. This report serves as comprehensive documentation of the FLOPS weight estimation method. The development process is presented along with the weight estimation process.

  17. Standardization in software conversion of (ROM) estimating

    NASA Technical Reports Server (NTRS)

    Roat, G. H.

    1984-01-01

    Technical problems and their solutions comprise by far the majority of work involved in space simulation engineering. Fixed price contracts with schedule award fees are becoming more and more prevalent. Accurate estimation of these jobs is critical to maintain costs within limits and to predict realistic contract schedule dates. Computerized estimating may hold the answer to these new problems, though up to now computerized estimating has been complex, expensive, and geared to the business world, not to technical people. The objective of this effort was to provide a simple program on a desk top computer capable of providing a Rough Order of Magnitude (ROM) estimate in a short time. This program is not intended to provide a highly detailed breakdown of costs to a customer, but to provide a number which can be used as a rough estimate on short notice. With more debugging and fine tuning, a more detailed estimate can be made.

  18. Automated support for experience-based software management

    NASA Technical Reports Server (NTRS)

    Valett, Jon D.

    1992-01-01

    To effectively manage a software development project, the software manager must have access to key information concerning a project's status. This information includes not only data relating to the project of interest, but also, the experience of past development efforts within the environment. This paper describes the concepts and functionality of a software management tool designed to provide this information. This tool, called the Software Management Environment (SME), enables the software manager to compare an ongoing development effort with previous efforts and with models of the 'typical' project within the environment, to predict future project status, to analyze a project's strengths and weaknesses, and to assess the project's quality. In order to provide these functions the tool utilizes a vast corporate memory that includes a data base of software metrics, a set of models and relationships that describe the software development environment, and a set of rules that capture other knowledge and experience of software managers within the environment. Integrating these major concepts into one software management tool, the SME is a model of the type of management tool needed for all software development organizations.

  19. Software archeology: a case study in software quality assurance and design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macdonald, John M; Lloyd, Jane A; Turner, Cameron J

    2009-01-01

    Ideally, quality is designed into software, just as quality is designed into hardware. However, when dealing with legacy systems, demonstrating that the software meets required quality standards may be difficult to achieve. As the need to demonstrate the quality of existing software was recognized at Los Alamos National Laboratory (LANL), an effort was initiated to uncover and demonstrate that legacy software met the required quality standards. This effort led to the development of a reverse engineering approach referred to as software archaeology. This paper documents the software archaeology approaches used at LANL to document legacy software systems. A case study for the Robotic Integrated Packaging System (RIPS) software is included.

  20. Projection of Maximum Software Maintenance Manning Levels.

    DTIC Science & Technology

    1982-06-01

    maintenance team development and for outyear support resource estimation, and to provide an analysis of applications of the model in areas other...by General Research Corporation of Santa Barbara, Ca., indicated that the Planning and Resource Management Information System (PARRIS) at the Air Force...determined that when the optimal input effort is applied, steps in the development would be achieved at a rate proportional to V(t). Thus the work-rate could

  1. Model for Simulating a Spiral Software-Development Process

    NASA Technical Reports Server (NTRS)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
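
    A drastically simplified sketch of the quantities such a simulation rolls up: per-iteration build effort from size and productivity, defect injection and detection, and rework effort. The parameter values and fixed-increment structure are illustrative assumptions, not the PATT/IEEE 12207 model.

      # Toy spiral-process roll-up: effort and defects accumulated over iterations.
      def simulate_spiral(total_sloc, iterations, productivity_sloc_per_hr=3.0,
                          defects_per_ksloc=8.0, detection_rate=0.75):
          remaining_defects = 0.0
          total_effort_hr = 0.0
          increment = total_sloc / iterations
          for i in range(1, iterations + 1):
              effort = increment / productivity_sloc_per_hr      # build effort this iteration
              injected = defects_per_ksloc * increment / 1000.0  # defects introduced
              remaining_defects += injected
              detected = detection_rate * remaining_defects      # found during testing
              effort += detected * 2.0                           # rework: assume 2 hr per defect
              remaining_defects -= detected
              total_effort_hr += effort
              print(f"iteration {i}: effort {effort:7.1f} hr, latent defects {remaining_defects:5.1f}")
          return total_effort_hr

      print("total effort (hr):", round(simulate_spiral(20000, 4)))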

  2. The Software Management Environment (SME)

    NASA Technical Reports Server (NTRS)

    Valett, Jon D.; Decker, William; Buell, John

    1988-01-01

    The Software Management Environment (SME) is a research effort designed to utilize the past experiences and results of the Software Engineering Laboratory (SEL) and to incorporate this knowledge into a tool for managing projects. SME provides the software development manager with the ability to observe, compare, predict, analyze, and control key software development parameters such as effort, reliability, and resource utilization. The major components of the SME, the architecture of the system, and examples of the functionality of the tool are discussed.

  3. Earthquake Loss Estimates in Near Real-Time

    NASA Astrophysics Data System (ADS)

    Wyss, Max; Wang, Rongjiang; Zschau, Jochen; Xia, Ye

    2006-10-01

    The usefulness to rescue teams of near-real-time loss estimates after major earthquakes is advancing rapidly. The difference in the quality of data available in highly developed compared with developing countries dictates that different approaches be used to maximize mitigation efforts. In developed countries, extensive information from tax and insurance records, together with accurate census figures, furnish detailed data on the fragility of buildings and on the number of people at risk. For example, these data are exploited by the method to estimate losses used in the Hazards U.S. Multi-Hazard (HAZUS-MH) software program (http://www.fema.gov/plan/prevent/hazus/). However, in developing countries, the population at risk is estimated from inferior data sources and the fragility of the building stock often is derived empirically, using past disastrous earthquakes for calibration [Wyss, 2004].

  4. Advances in model-based software for simulating ultrasonic immersion inspections of metal components

    NASA Astrophysics Data System (ADS)

    Chiou, Chien-Ping; Margetan, Frank J.; Taylor, Jared L.; Engle, Brady J.; Roberts, Ronald A.

    2018-04-01

    Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was initiated in 2015 to repackage existing research-grade software into user-friendly tools for the rapid estimation of signal-to-noise ratio (SNR) for ultrasonic inspections of metals. The software combines: (1) a Python-based graphical user interface for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signals and backscattered grain noise characteristics. The latter makes use of the Thompson-Gray measurement model for the response from an internal defect, and the Thompson-Margetan independent scatterer model for backscattered grain noise. This paper, the third in the series [1-2], provides an overview of the ongoing modeling effort with emphasis on recent developments. These include the ability to: (1) treat microstructures where grain size, shape and tilt relative to the incident sound direction can all vary with depth; and (2) simulate C-scans of defect signals in the presence of backscattered grain noise. The simulation software can now treat both normal and oblique-incidence immersion inspections of curved metal components. Both longitudinal and shear-wave inspections are treated. The model transducer can be planar, spherically-focused, or bi-cylindrically-focused. A calibration (or reference) signal is required and is used to deduce the measurement system efficiency function. This can be "invented" by the software using center frequency and bandwidth information specified by the user, or, alternatively, a measured calibration signal can be used. Defect types include flat-bottomed-hole reference reflectors, and spherical pores and inclusions. Simulation outputs include estimated defect signal amplitudes, root-mean-square values of grain noise amplitudes, and SNR as functions of the depth of the defect within the metal component. At any particular depth, the user can view simulated A-, B-, and C-scans displaying the superimposed defect and grain-noise waveforms. The realistic grain noise signals used in the A-scans are generated from a set of measured "universal" noise signals whose strengths and spectral characteristics are altered to match predicted noise characteristics for the simulation at hand.

  5. A Planning Tool for Estimating Waste Generated by a Radiological Incident and Subsequent Decontamination Efforts - 13569

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boe, Timothy; Lemieux, Paul; Schultheisz, Daniel

    2013-07-01

    Management of debris and waste from a wide-area radiological incident would probably constitute a significant percentage of the total remediation cost and effort. The U.S. Environmental Protection Agency's (EPA's) Waste Estimation Support Tool (WEST) is a unique planning tool for estimating the potential volume and radioactivity levels of waste generated by a radiological incident and subsequent decontamination efforts. The WEST was developed to support planners and decision makers by generating a first-order estimate of the quantity and characteristics of waste resulting from a radiological incident. The tool then allows the user to evaluate the impact of various decontamination/demolition strategies on the waste types and volumes generated. WEST consists of a suite of standalone applications and Esri® ArcGIS® scripts for rapidly estimating waste inventories and levels of radioactivity generated from a radiological contamination incident as a function of user-defined decontamination and demolition approaches. WEST accepts Geographic Information System (GIS) shape-files defining contaminated areas and extent of contamination. Building stock information, including square footage, building counts, and building composition estimates are then generated using the Federal Emergency Management Agency's (FEMA's) Hazus®-MH software. WEST then identifies outdoor surfaces based on the application of pattern recognition to overhead aerial imagery. The results from the GIS calculations are then fed into a Microsoft Excel® 2007 spreadsheet with a custom graphical user interface where the user can examine the impact of various decontamination/demolition scenarios on the quantity, characteristics, and residual radioactivity of the resulting waste streams. (authors)

  6. EOS MLS Level 2 Data Processing Software Version 3

    NASA Technical Reports Server (NTRS)

    Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; wang, Shuhui; Manney, Gloria L.; hide

    2011-01-01

    This software accepts the EOS MLS calibrated measurements of microwave radiances and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori spectroscopic and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.

  7. Calibration of the Software Architecture Sizing and Estimation Tool (SASET).

    DTIC Science & Technology

    1995-09-01

    model is of more value than the uncalibrated one. Also, as will be discussed in Chapters 3 and 4, there are quite a few manual (and undocumented) steps...complexity, normalized effective size, and normalized effort. One other field ("development phases included") was extracted manually since it was not listed...Bowden, R.G., Cheadle, W.G., & Ratliff, R.W. SASET 3.0 Technical Reference Manual. Publication S-3730-93-2. Denver: Martin Marietta Astronautics

  8. Software Development Projects: Estimation of Cost and Effort (A Manager’s Digest).

    DTIC Science & Technology

    1982-12-01

    it later when changes need to be made to it. Various levels of difficulty are experienced due to the skill level of the programmer, poor program...impurities that, if eliminated, reduce the level of complexity of the program. They are as follows: 1. Complementary Operations: unreduced expressions 2...greater quality than that to support standard business applications. Remus defines quality as "...the number of program defects normalized by size over

  9. Cost Estimation of Software Development and the Implications for the Program Manager

    DTIC Science & Technology

    1992-06-01

    Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive...function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and...Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome

  10. Software Measurement Guidebook

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This Software Measurement Guidebook is based on the extensive experience of several organizations that have each developed and applied significant measurement programs over a period of at least 10 years. The lessons derived from those experiences reflect not only successes but also failures. By applying those lessons, an organization can minimize, or at least reduce, the time, effort, and frustration of introducing a software measurement program. The Software Measurement Guidebook is aimed at helping organizations to begin or improve a measurement program. It does not provide guidance for the extensive application of specific measures (such as how to estimate software cost or analyze software complexity) other than by providing examples to clarify points. It does contain advice for establishing and using an effective software measurement program and for understanding some of the key lessons that other organizations have learned. Some of that advice will appear counterintuitive, but it is all based on actual experience. Although all of the information presented in this guidebook is derived from specific experiences of mature measurement programs, the reader must keep in mind that the characteristics of every organization are unique. Some degree of measurement is critical for all software development and maintenance organizations, and most of the key rules captured in this report will be generally applicable. Nevertheless, each organization must strive to understand its own environment so that the measurement program can be tailored to suit its characteristics and needs.

  11. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
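    The kind of nonlinear least-squares parameter estimation that codes such as PEST and PEST++ automate can be illustrated with a small, generic sketch. This is not PEST++ code or its actual algorithm; the exponential "model", the synthetic observations, and the damping constant are assumptions made for the example.

      # Generic damped Gauss-Newton (Levenberg-Marquardt style) parameter fit.
      import numpy as np

      def model(params, x):
          a, b = params
          return a * np.exp(-b * x)            # simple two-parameter toy model

      def estimate(x, obs, p0, lam=1e-3, iters=50):
          p = np.asarray(p0, dtype=float)
          for _ in range(iters):
              r = obs - model(p, x)                     # residual vector
              J = np.empty((x.size, p.size))            # finite-difference Jacobian
              for j in range(p.size):
                  dp = np.zeros_like(p)
                  dp[j] = 1e-6 * max(abs(p[j]), 1.0)
                  J[:, j] = (model(p + dp, x) - model(p, x)) / dp[j]
              # damped normal equations: (J^T J + lam I) step = J^T r
              step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
              p += step
              if np.linalg.norm(step) < 1e-10:
                  break
          return p

      x = np.linspace(0.0, 5.0, 20)
      true = np.array([2.0, 0.7])
      obs = model(true, x) + 0.01 * np.random.default_rng(0).standard_normal(x.size)
      print(estimate(x, obs, p0=[1.0, 1.0]))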

  12. Space Station Freedom biomedical monitoring and countermeasures: Biomedical facility hardware catalog

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This hardware catalog covers the hardware proposed under the Biomedical Monitoring and Countermeasures Development Program supported by the Johnson Space Center. The hardware items are listed separately by item, and are in alphabetical order. Each hardware item specification consists of four pages. The first page describes background information with an illustration, definition and a history/design status. The second page identifies the general specifications, performance, rack interface requirements, problems, issues, concerns, physical description, and functional description. The level of hardware design reliability is also identified under the maintainability and reliability category. The third page specifies the mechanical design guidelines and assumptions. Described are the material types and weights, modules, and construction methods. Also described is an estimate of the percentage of construction that utilizes a particular method, and the percentage of required new mechanical design is documented. The fourth page analyzes the electronics, the scope of design effort, and the software requirements. Electronics are described by percentages of component types and new design. The design effort, as well as the software requirements, is identified and categorized.

  13. Population estimates and monitoring guidelines for endangered Laysan Teal, Anas Laysanensis, at Midway Atoll: Pilot study results 2008-2010.

    USGS Publications Warehouse

    Reynolds, Michelle H.; Brinck, Kevin W.; Laniawe, Leona

    2011-01-01

    To improve the Laysan Teal population estimates, we recommend changes to the monitoring protocol. Additional years of data are needed to quantify inter-annual seasonal detection probabilities, which may allow the use of standardized direct counts as an unbiased index of population size. Survey protocols should be enhanced through frequent resights, regular survey intervals, and determining reliable standards to detect catastrophic declines and annual changes in adult abundance. In late 2009 to early 2010, 68% of the population was marked with unique color band combinations. This allowed for potentially accurate adult population estimates and survival estimates without the need to mark new birds in 2010, 2011, and possibly 2012. However, efforts should be made to replace worn or illegible bands so birds can be identified in future surveys. It would be valuable to develop more sophisticated population size and survival models using Program MARK, a state-of-the-art software package which uses likelihood models to analyze mark-recapture data. This would allow for more reliable adult population and survival estimates to compare with the "source" Laysan Teal population on Laysan Island. These models will require additional years of resight data (> 1 year) and, in some cases, an intensive annual effort of marking and recapture. Because data indicate standardized all-wetland counts are a poor index of abundance, monitoring efforts could be improved by expanding resight surveys to include all wetlands, discontinuing the all-wetland counts, and reallocating some of the wetland count effort to collect additional opportunistic resights. Approximately two years of additional bimonthly surveys are needed to validate the direct count as an appropriate index of population abundance. Additional years of individual resight data will allow estimates of adult population size, as specified in recovery criteria, and to track species population dynamics at Midway Atoll.

  14. Development of a comprehensive software engineering environment

    NASA Technical Reports Server (NTRS)

    Hartrum, Thomas C.; Lamont, Gary B.

    1987-01-01

    The generation of a set of tools for the software lifecycle is a recurring theme in the software engineering literature. The development of such tools and their integration into a software development environment is a difficult task because of the magnitude (number of variables) and the complexity (combinatorics) of the software lifecycle process. An initial development of a global approach was initiated in 1982 as the Software Development Workbench (SDW). Continuing efforts focus on tool development, tool integration, human interfacing, data dictionaries, and testing algorithms. Current efforts are emphasizing natural language interfaces, expert system software development associates and distributed environments with Ada as the target language. The current implementation of the SDW is on a VAX-11/780. Other software development tools are being networked through engineering workstations.

  15. Lessons Learned from Application of System and Software Level RAMS Analysis to a Space Control System

    NASA Astrophysics Data System (ADS)

    Silva, N.; Esper, A.

    2012-01-01

    The work presented in this article represents the results of applying RAMS analysis to a critical space control system, both at system and software levels. The system level RAMS analysis allowed the assignment of criticalities to the high level components, which was further refined by a tailored software level RAMS analysis. The importance of the software level RAMS analysis in the identification of new failure modes and its impact on the system level RAMS analysis is discussed. Recommendations of changes in the software architecture have also been proposed in order to reduce the criticality of the SW components to an acceptable minimum. The dependability analysis was performed in accordance with ECSS-Q-ST-80, which had to be tailored and complemented in some aspects. This tailoring will also be detailed in the article, and lessons learned from the application of this tailoring will be shared, underlining their importance to space system safety evaluations. The paper presents the applied techniques, the relevant results obtained, the effort required for performing the tasks and the planned strategy for ROI estimation, as well as the soft skills required and acquired during these activities.

  16. Design of a modular digital computer system DRL 4 and 5. [design of airborne/spaceborne computer system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Design and development efforts for a spaceborne modular computer system are reported. An initial baseline description is followed by an interface design that includes definition of the overall system response to all classes of failure. Final versions for the register level designs for all module types were completed. Packaging, support and control executive software, including memory utilization estimates and design verification plan, were formalized to insure a soundly integrated design of the digital computer system.

  17. LOGDIS (Logistics Data Integration System): Productivity evaluation report. [Logistics Data Integration System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, E.E.; Abrahams, S.; Banks, W.W. Jr.

    1987-08-26

    This report presents the results of an end-user productivity study associated with the implementation of LOGDIS (Logistics Data Integration System), a prototype LLNL Gateway system. Its purpose is to provide AFLC decision-makers with data pertaining to estimates of any productivity changes that can be expected or realized by the use of advanced technology and applications software. The effort described in this report focuses on objective functional user productivity and performance changes which can be attributed directly and objectively to LOGDIS.

  18. Formal Analysis of the Remote Agent Before and After Flight

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Lowry, Mike; Park, SeungJoon; Pecheur, Charles; Penix, John; Visser, Willem; White, Jon L.

    2000-01-01

    This paper describes two separate efforts that used the SPIN model checker to verify deep space autonomy flight software. The first effort occurred at the beginning of a spiral development process and found five concurrency errors early in the design cycle that the developers acknowledge would not have been found through testing. This effort required a substantial manual modeling effort involving both abstraction and translation from the prototype LISP code to the PROMELA language used by SPIN. This experience and others led to research to address the gap between formal method tools and the development cycle used by software developers. The Java PathFinder tool which directly translates from Java to PROMELA was developed as part of this research, as well as automatic abstraction tools. In 1999 the flight software flew on a space mission, and a deadlock occurred in a sibling subsystem to the one which was the focus of the first verification effort. A second quick-response "cleanroom" verification effort found the concurrency error in a short amount of time. The error was isomorphic to one of the concurrency errors found during the first verification effort. The paper demonstrates, first, that formal methods tools can find concurrency errors that indeed lead to loss of spacecraft functions, even for the complex software required for autonomy. Second, it describes progress in automatic translation and abstraction that eventually will enable formal methods tools to be inserted directly into the aerospace software development cycle.

  19. Calculation and use of an environment's characteristic software metric set

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    Since both cost/quality and production environments differ, this study presents an approach for customizing a characteristic set of software metrics to an environment. The approach is applied in the Software Engineering Laboratory (SEL), a NASA Goddard production environment, to 49 candidate process and product metrics of 652 modules from six (51,000 to 112,000 lines) projects. For this particular environment, the method yielded the characteristic metric set (source lines, fault correction effort per executable statement, design effort, code effort, number of I/O parameters, number of versions). The uses examined for a characteristic metric set include forecasting the effort for development, modification, and fault correction of modules based on historical data.
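    As an illustration of the forecasting use mentioned above, a characteristic metric set can be fed into a simple regression over historical module data. The metric values and effort figures below are invented for the example, and plain least squares is only one of many ways such forecasts can be built.

      # Forecast module effort from a characteristic metric set (illustrative data).
      import numpy as np

      # columns: source lines, design effort (h), code effort (h), number of I/O parameters
      history = np.array([
          [1200., 40.,  60., 5.],
          [ 800., 25.,  35., 3.],
          [2500., 90., 120., 9.],
          [ 400., 10.,  15., 2.],
          [1700., 55.,  80., 7.],
          [ 950., 30.,  45., 4.],
      ])
      effort = np.array([150., 90., 310., 45., 205., 115.])   # total hours per module

      # fit effort ~ metrics (with intercept) by least squares on the historical data
      X = np.column_stack([np.ones(len(history)), history])
      coef, *_ = np.linalg.lstsq(X, effort, rcond=None)

      new_module = np.array([1.0, 1500., 50., 70., 6.])
      print("forecast hours:", round(float(new_module @ coef), 1))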

  20. Announcing a Community Effort to Create an Information Model for Research Software Archives

    NASA Astrophysics Data System (ADS)

    Million, C.; Brazier, A.; King, T.; Hayes, A.

    2018-04-01

    An effort has started to create recommendations and standards for the archiving of planetary science research software. The primary goal is to define an information model that is consistent with OAIS standards.

  1. Genetic Programming as Alternative for Predicting Development Effort of Individual Software Projects

    PubMed Central

    Chavoya, Arturo; Lopez-Martin, Cuauhtemoc; Andalon-Garcia, Irma R.; Meda-Campaña, M. E.

    2012-01-01

    Statistical and genetic programming techniques have been used to predict the software development effort of large software projects. In this paper, a genetic programming model was used for predicting the effort required in individually developed projects. Accuracy obtained from a genetic programming model was compared against one generated from the application of a statistical regression model. A sample of 219 projects developed by 71 practitioners was used for generating the two models, whereas another sample of 130 projects developed by 38 practitioners was used for validating them. The models used two kinds of lines of code as well as programming language experience as independent variables. Accuracy results from the model obtained with genetic programming suggest that it could be used to predict the software development effort of individual projects when these projects have been developed in a disciplined manner within a development-controlled environment. PMID:23226305
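    The statistical-regression baseline against which the genetic-programming model is compared can be sketched as follows. The project data, the linear functional form, and the MMRE accuracy measure are illustrative assumptions; the genetic-programming model instead evolves the functional form itself.

      # Fit a linear effort model on a training sample, then check accuracy (MMRE)
      # on a validation sample (all numbers invented for illustration).
      import numpy as np

      # columns: new & changed LOC, reused LOC, language experience (ordinal scale)
      train_X = np.array([[120., 30., 2.], [300., 80., 4.], [ 90., 10., 1.],
                          [210., 60., 3.], [150., 20., 5.], [260., 40., 2.]])
      train_y = np.array([410., 900., 350., 700., 380., 820.])    # effort, minutes

      valid_X = np.array([[180., 40., 3.], [110., 15., 4.]])
      valid_y = np.array([560., 330.])

      A = np.column_stack([np.ones(len(train_X)), train_X])
      beta, *_ = np.linalg.lstsq(A, train_y, rcond=None)

      pred = np.column_stack([np.ones(len(valid_X)), valid_X]) @ beta
      mmre = np.mean(np.abs(valid_y - pred) / valid_y)   # mean magnitude of relative error
      print("predictions:", pred.round(1), "MMRE:", round(float(mmre), 3))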

  2. Practical Methods for Estimating Software Systems Fault Content and Location

    NASA Technical Reports Server (NTRS)

    Nikora, A.; Schneidewind, N.; Munson, J.

    1999-01-01

    Over the past several years, we have developed techniques to discriminate between fault-prone software modules and those that are not, to estimate a software system's residual fault content, to identify those portions of a software system having the highest estimated number of faults, and to estimate the effects of requirements changes on software quality.

  3. Modular Software for Spacecraft Navigation Using the Global Positioning System (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, S. H.; Hartman, K. R.; Weidow, D. A.; Berry, D. L.; Oza, D. H.; Long, A. C.; Joyce, E.; Steger, W. L.

    1996-01-01

    The Goddard Space Flight Center Flight Dynamics and Mission Operations Divisions have jointly investigated the feasibility of engineering modular Global Positioning System (GPS) navigation software to support both real time flight and ground postprocessing configurations. The goals of this effort are to define standard GPS data interfaces and to engineer standard, reusable navigation software components that can be used to build a broad range of GPS navigation support applications. The paper discusses the GPS modular software (GMOD) system and operations concepts, major requirements, candidate software architecture, feasibility assessment and recommended software interface standards. In addition, ongoing efforts to broaden the scope of the initial study and to develop modular software to support autonomous navigation using GPS are addressed.

  4. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90ms (11Hz). This is suitable to track tumors moving due to respiration (~0.3Hz) or heartbeat (~1Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies, patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.

  5. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    NASA Technical Reports Server (NTRS)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is due to the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has characteristics (mean, variance, and power spectral density) similar to those of the true CAST estimation error. In this way, ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from CAST software.
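    A signal generation model of this kind can be sketched as filtered white noise rescaled to a target mean and variance, with the filter choice shaping the power spectral density. The first-order filter and the target statistics below are illustrative assumptions, not the SMAP values.

      # Generate a synthetic estimation-error time series with prescribed mean and
      # variance; the AR(1) filter gives the signal a low-pass (non-flat) PSD.
      import numpy as np

      def generate_error(n, target_mean, target_var, rho=0.95, seed=0):
          rng = np.random.default_rng(seed)
          w = rng.standard_normal(n)
          x = np.empty(n)
          x[0] = w[0]
          for k in range(1, n):                  # AR(1): x[k] = rho*x[k-1] + w[k]
              x[k] = rho * x[k - 1] + w[k]
          x = (x - x.mean()) / x.std()           # normalize, then impose targets
          return target_mean + np.sqrt(target_var) * x

      err = generate_error(10000, target_mean=0.002, target_var=1.5e-5)
      print(err.mean(), err.var())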

  6. Learning from examples - Generation and evaluation of decision trees for software resource analysis

    NASA Technical Reports Server (NTRS)

    Selby, Richard W.; Porter, Adam A.

    1988-01-01

    A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
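    The classification idea can be illustrated with a small decision-tree example; scikit-learn is used here purely as a stand-in for the tree-generation method of the study, and the module metrics and labels are invented.

      # Classify modules as high-effort or not from a few candidate metrics.
      from sklearn.tree import DecisionTreeClassifier, export_text

      # columns: source lines, number of changes, design effort (hours)
      modules = [
          [1200, 15, 40], [300, 2, 8], [2500, 30, 95], [450, 3, 10],
          [1800, 22, 60], [200, 1, 5], [900, 9, 25], [1600, 18, 55],
      ]
      high_effort = [1, 0, 1, 0, 1, 0, 0, 1]     # class label per module

      tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(modules, high_effort)
      print(export_text(tree, feature_names=["sloc", "changes", "design_hours"]))
      print(tree.predict([[1100, 12, 35]]))      # predict class for a new module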

  7. Organizational management practices for achieving software process improvement

    NASA Technical Reports Server (NTRS)

    Kandt, Ronald Kirk

    2004-01-01

    The crisis in developing software has been known for over thirty years. Problems that existed in developing software in the early days of computing still exist today. These problems include the delivery of low-quality products, actual development costs that exceed expected development costs, and actual development time that exceeds expected development time. Several solutions have been offered to overcome our inability to deliver high-quality software, on time and within budget. One of these solutions involves software process improvement. However, such efforts often fail because of organizational management issues. This paper discusses business practices that organizations should follow to improve their chances of initiating and sustaining successful software process improvement efforts.

  8. Assuring Software Cost Estimates: Is it an Oxymoron?

    NASA Technical Reports Server (NTRS)

    Hihn, Jarius; Tregre, Grant

    2013-01-01

    The software industry repeatedly observes cost growth of well over 100% even after decades of cost estimation research and well-known best practices, so "What's the problem?" In this paper we will provide an overview of the current state of software cost estimation best practice. We then explore whether applying some of the methods used in software assurance might improve the quality of software cost estimates. This paper especially focuses on issues associated with model calibration, estimate review, and the development and documentation of estimates as part of an integrated plan.

  9. OpenSim: open-source software to create and analyze dynamic simulations of movement.

    PubMed

    Delp, Scott L; Anderson, Frank C; Arnold, Allison S; Loan, Peter; Habib, Ayman; John, Chand T; Guendelman, Eran; Thelen, Darryl G

    2007-11-01

    Dynamic simulations of movement allow one to study neuromuscular coordination, analyze athletic performance, and estimate internal loading of the musculoskeletal system. Simulations can also be used to identify the sources of pathological movement and establish a scientific basis for treatment planning. We have developed a freely available, open-source software system (OpenSim) that lets users develop models of musculoskeletal structures and create dynamic simulations of a wide variety of movements. We are using this system to simulate the dynamics of individuals with pathological gait and to explore the biomechanical effects of treatments. OpenSim provides a platform on which the biomechanics community can build a library of simulations that can be exchanged, tested, analyzed, and improved through a multi-institutional collaboration. Developing software that enables a concerted effort from many investigators poses technical and sociological challenges. Meeting those challenges will accelerate the discovery of principles that govern movement control and improve treatments for individuals with movement pathologies.

  10. Steps toward a CONUS-wide reanalysis with archived NEXRAD data using National Mosaic and Multisensor Quantitative Precipitation Estimation (NMQ/Q2) algorithms

    NASA Astrophysics Data System (ADS)

    Stevens, S. E.; Nelson, B. R.; Langston, C.; Qi, Y.

    2012-12-01

    The National Mosaic and Multisensor QPE (NMQ/Q2) software suite, developed at NOAA's National Severe Storms Laboratory (NSSL) in Norman, OK, addresses a large deficiency in the resolution of currently archived precipitation datasets. Current standards, both radar- and satellite-based, provide for nationwide precipitation data with a spatial resolution of up to 4-5 km, with a temporal resolution as fine as one hour. Efforts are ongoing to process archived NEXRAD data for the period of record (1996 - present), producing a continuous dataset providing precipitation data at a spatial resolution of 1 km, on a timescale of only five minutes. In addition, radar-derived precipitation data are adjusted hourly using a wide variety of automated gauge networks spanning the United States. Applications for such a product range widely, from emergency management and flash flood guidance, to hydrological studies and drought monitoring. Results are presented from a subset of the NEXRAD dataset, providing basic statistics on the distribution of rainrates, relative frequency of precipitation types, and several other variables which demonstrate the variety of output provided by the software. Precipitation data from select case studies are also presented to highlight the increased resolution provided by this reanalysis and the possibilities that arise from the availability of data on such fine scales. A previously completed pilot project and steps toward a nationwide implementation are presented along with proposed strategies for managing and processing such a large dataset. Reprocessing efforts span several institutions in both North Carolina and Oklahoma, and data/software coordination are key in producing a homogeneous record of precipitation to be archived alongside NOAA's other Climate Data Records. Methods are presented for utilizing supercomputing capability in expediting processing, to allow for the iterative nature of a reanalysis effort.

  11. Proceedings of the Thirteenth Annual Software Engineering Workshop

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Topics covered in the workshop included studies and experiments conducted in the Software Engineering Laboratory (SEL), a cooperative effort of NASA Goddard Space Flight Center, the University of Maryland, and Computer Sciences Corporation; software models; software products; and software tools.

  12. Language and Program for Documenting Software Design

    NASA Technical Reports Server (NTRS)

    Kleine, H.; Zepko, T. M.

    1986-01-01

    Software Design and Documentation Language (SDDL) provides effective communication medium to support design and documentation of complex software applications. SDDL supports communication among all members of software design team and provides for production of informative documentation on design effort. Use of SDDL-generated document to analyze design makes it possible to eliminate many errors not detected until coding and testing attempted. SDDL processor program translates designer's creative thinking into effective document for communication. Processor performs as many automatic functions as possible, freeing designer's energy for creative effort. SDDL processor program written in PASCAL.

  13. ShakeMap manual: technical manual, user's guide, and software guide

    USGS Publications Warehouse

    Wald, David J.; Worden, Bruce C.; Quitoriano, Vincent; Pankow, Kris L.

    2005-01-01

    ShakeMap (http://earthquake.usgs.gov/shakemap) --rapidly, automatically generated shaking and intensity maps--combines instrumental measurements of shaking with information about local geology and earthquake location and magnitude to estimate shaking variations throughout a geographic area. The results are rapidly available via the Web through a variety of map formats, including Geographic Information System (GIS) coverages. These maps have become a valuable tool for emergency response, public information, loss estimation, earthquake planning, and post-earthquake engineering and scientific analyses. With the adoption of ShakeMap as a standard tool for a wide array of users and uses came an impressive demand for up-to-date technical documentation and more general guidelines for users and software developers. This manual is meant to address this need. ShakeMap, and associated Web and data products, are rapidly evolving as new advances in communications, earthquake science, and user needs drive improvements. As such, this documentation is organic in nature. We will make every effort to keep it current, but undoubtedly necessary changes in operational systems take precedence over producing and making documentation publishable.

  14. Leveraging object-oriented development at Ames

    NASA Technical Reports Server (NTRS)

    Wenneson, Greg; Connell, John

    1994-01-01

    This paper presents lessons learned by the Software Engineering Process Group (SEPG) from results of supporting two projects at NASA Ames using an Object Oriented Rapid Prototyping (OORP) approach supported by a full featured visual development environment. Supplemental lessons learned from a large project in progress and a requirements definition are also incorporated. The paper demonstrates how productivity gains can be made by leveraging the developer with a rich development environment, correct and early requirements definition using rapid prototyping, and earlier and better effort estimation and software sizing through object-oriented methods and metrics. Although the individual elements of OO methods, RP approach and OO metrics had been used on other separate projects, the reported projects were the first integrated usage supported by a rich development environment. Overall the approach used was twice as productive (measured by hours per OO Unit) as a C++ development.

  15. Practical Issues in Implementing Software Reliability Measurement

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.

    1999-01-01

    Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.

  16. Top 10 metrics for life science software good practices.

    PubMed

    Artaza, Haydee; Chue Hong, Neil; Corpas, Manuel; Corpuz, Angel; Hooft, Rob; Jimenez, Rafael C; Leskošek, Brane; Olivier, Brett G; Stourac, Jan; Svobodová Vařeková, Radka; Van Parys, Thomas; Vaughan, Daniel

    2016-01-01

    Metrics for assessing adoption of good development practices are a useful way to ensure that software is sustainable, reusable and functional. Sustainability means that the software used today will be available - and continue to be improved and supported - in the future. We report here an initial set of metrics that measure good practices in software development. This initiative differs from previously developed efforts in being a community-driven grassroots approach where experts from different organisations propose good software practices that have reasonable potential to be adopted by the communities they represent. We not only focus our efforts on understanding and prioritising good practices, we assess their feasibility for implementation and publish them here.

  17. Top 10 metrics for life science software good practices

    PubMed Central

    2016-01-01

    Metrics for assessing adoption of good development practices are a useful way to ensure that software is sustainable, reusable and functional. Sustainability means that the software used today will be available - and continue to be improved and supported - in the future. We report here an initial set of metrics that measure good practices in software development. This initiative differs from previously developed efforts in being a community-driven grassroots approach where experts from different organisations propose good software practices that have reasonable potential to be adopted by the communities they represent. We not only focus our efforts on understanding and prioritising good practices, we assess their feasibility for implementation and publish them here. PMID:27635232

  18. A Change Impact Analysis to Characterize Evolving Program Behaviors

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam; Person, Suzette; Branchaud, Joshua

    2012-01-01

    Change impact analysis techniques estimate the potential effects of changes made to software. Directed Incremental Symbolic Execution (DiSE) is an intraprocedural technique for characterizing the impact of software changes on program behaviors. DiSE first estimates the impact of the changes on the source code using program slicing techniques, and then uses the impact sets to guide symbolic execution to generate path conditions that characterize impacted program behaviors. DiSE, however, cannot reason about the flow of impact between methods and will fail to generate path conditions for certain impacted program behaviors. In this work, we present iDiSE, an extension to DiSE that performs an interprocedural analysis. iDiSE combines static and dynamic calling context information to efficiently generate impacted program behaviors across calling contexts. Information about impacted program behaviors is useful for testing, verification, and debugging of evolving programs. We present a case study of our implementation of the iDiSE algorithm to demonstrate its efficiency at computing impacted program behaviors. Traditional notions of coverage are insufficient for characterizing the testing efforts used to validate evolving program behaviors because they do not take into account the impact of changes to the code. In this work we present novel definitions of impacted coverage metrics that are useful for evaluating the testing effort required to test evolving programs. We then describe how the notions of impacted coverage can be used to configure techniques such as DiSE and iDiSE in order to support regression testing related tasks. We also discuss how DiSE and iDiSE can be configured for debugging, i.e., finding the root cause of errors introduced by changes made to the code. In our empirical evaluation we demonstrate that the configurations of DiSE and iDiSE can be used to support various software maintenance tasks.
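    The impacted-coverage idea can be captured in a few lines: coverage is measured only over the program elements estimated to be impacted by the change. The element sets below are hypothetical; in DiSE/iDiSE the impacted set comes from slicing and symbolic execution.

      # Impacted coverage = exercised impacted elements / all impacted elements.
      def impacted_coverage(covered, impacted):
          if not impacted:
              return 1.0
          return len(covered & impacted) / len(impacted)

      covered_branches  = {"b1", "b2", "b5", "b7"}          # exercised by the test suite
      impacted_branches = {"b2", "b5", "b6", "b7", "b9"}    # estimated as affected by the change

      print(f"impacted coverage = {impacted_coverage(covered_branches, impacted_branches):.2f}")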

  19. Infusing Software Engineering Technology into Practice at NASA

    NASA Technical Reports Server (NTRS)

    Pressburger, Thomas; Feather, Martin S.; Hinchey, Michael; Markosian, Lawrence

    2006-01-01

    We present an ongoing effort of the NASA Software Engineering Initiative to encourage the use of advanced software engineering technology on NASA projects. Technology infusion is in general a difficult process yet this effort seems to have found a modest approach that is successful for some types of technologies. We outline the process and describe the experience of the technology infusions that occurred over a two year period. We also present some lessons from the experiences.

  20. Metric analysis and data validation across FORTRAN projects

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.; Phillips, Tsai-Yun

    1983-01-01

    The desire to predict the effort in developing or explaining the quality of software has led to the proposal of several metrics. As a step toward validating these metrics, the Software Engineering Laboratory (SEL) has analyzed the software science metrics, cyclomatic complexity, and various standard program measures for their relation to effort (including design through acceptance testing), development errors (both discrete and weighted according to the amount of time to locate and fix), and one another. The data investigated are collected from a project FORTRAN environment and examined across several projects at once, within individual projects and by reporting accuracy checks demonstrating the need to validate a database. When the data come from individual programmers or certain validated projects, the metrics' correlations with actual effort seem to be strongest. For modules developed entirely by individual programmers, the validity ratios induce a statistically significant ordering of several of the metrics' correlations. When comparing the strongest correlations, neither software science's E metric, cyclomatic complexity, nor source lines of code appears to relate convincingly better with effort than the others.
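    The flavor of this metric-versus-effort analysis can be sketched with a simple correlation computation; the module data below are invented, and Pearson correlation stands in for the fuller SEL analyses.

      # Correlate candidate metrics with module effort (illustrative data only).
      import numpy as np

      effort     = np.array([ 20.,  35.,  80.,  15.,  60.,  45.])   # hours
      sloc       = np.array([150., 300., 700., 100., 520., 380.])
      cyclomatic = np.array([  8.,  14.,  30.,   5.,  22.,  18.])
      halstead_E = np.array([3e4,  7e4,  2.1e5, 1.8e4, 1.5e5, 9e4])

      for name, metric in [("SLOC", sloc), ("v(G)", cyclomatic), ("E", halstead_E)]:
          r = np.corrcoef(metric, effort)[0, 1]
          print(f"{name}: r = {r:.2f}")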

  1. The Business Case for Automated Software Engineering

    NASA Technical Reports Server (NTRS)

    Menzies, Tim; Elrawas, Oussama; Hihn, Jairus M.; Feather, Martin S.; Madachy, Ray; Boehm, Barry

    2007-01-01

    Adoption of advanced automated SE (ASE) tools would be more favored if a business case could be made that these tools are more valuable than alternate methods. In theory, software prediction models can be used to make that case. In practice, this is complicated by the 'local tuning' problem. Normally, predictors for software effort, defects, and threats use local data to tune their predictions. Such local tuning data is often unavailable. This paper shows that assessing the relative merits of different SE methods need not require precise local tunings. STAR 1 is a simulated annealer plus a Bayesian post-processor that explores the space of possible local tunings within software prediction models. STAR 1 ranks project decisions by their effects on effort and defects and threats. In experiments with NASA systems, STAR 1 found one project where ASE tools were essential for minimizing effort/defects/threats, and another project where ASE tools were merely optional.
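    A toy version of exploring the space of possible tunings with simulated annealing is sketched below; the decision space and the combined effort/defect score are invented for illustration, and the Bayesian post-processing step is omitted.

      # Simulated annealing over a tiny space of project decisions (toy example).
      import math, random

      decisions = {"use_ase_tools": [0, 1], "peer_reviews": [0, 1], "team_size": [3, 5, 8]}

      def score(cfg):
          # hypothetical combined effort/defect penalty (lower is better)
          effort  = 100 - 15 * cfg["use_ase_tools"] - 5 * cfg["peer_reviews"] + 2 * cfg["team_size"]
          defects = 50 - 20 * cfg["peer_reviews"] - 10 * cfg["use_ase_tools"]
          return effort + defects

      def anneal(steps=2000, temp0=10.0, seed=1):
          rng = random.Random(seed)
          cfg = {k: rng.choice(v) for k, v in decisions.items()}
          best = dict(cfg)
          for i in range(steps):
              temp = temp0 * (1 - i / steps) + 1e-6
              k = rng.choice(list(decisions))
              cand = dict(cfg)
              cand[k] = rng.choice(decisions[k])
              delta = score(cand) - score(cfg)
              if delta < 0 or rng.random() < math.exp(-delta / temp):
                  cfg = cand                    # accept better (or sometimes worse) moves
              if score(cfg) < score(best):
                  best = dict(cfg)
          return best, score(best)

      print(anneal())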

  2. Cost-engineering modeling to support rapid concept development of an advanced infrared satellite system

    NASA Astrophysics Data System (ADS)

    Bell, Kevin D.; Dafesh, Philip A.; Hsu, L. A.; Tsuda, A. S.

    1995-12-01

    Current architectural and design trade techniques often carry unaffordable alternatives late into the decision process. Early decisions made during the concept exploration and development (CE&D) phase will drive the cost of a program more than any other phase of development; thus, designers must be able to assess both the performance and cost impacts of their early choices. The Space Based Infrared System (SBIRS) cost engineering model (CEM) described in this paper is an end-to-end process integrating engineering and cost expertise through commonly available spreadsheet software, allowing for concurrent design engineering and cost estimation to identify and balance system drives to reduce acquisition costs. The automated interconnectivity between subsystem models using spreadsheet software allows for the quick and consistent assessment of the system design impacts and relative cost impacts due to requirement changes. It is different from most CEM efforts attempted in the past as it incorporates more detailed spacecraft and sensor payload models, and has been applied to determine the cost drivers for an advanced infrared satellite system acquisition. The CEM is comprised of integrated detailed engineering and cost estimating relationships describing performance, design, and cost parameters. Detailed models have been developed to evaluate design parameters for the spacecraft bus and sensor; both step-starer and scanner sensor types incorporate models of focal plane array, optics, processing, thermal, communications, and mission performance. The current CEM effort has provided visibility to requirements, design, and cost drivers for system architects and decision makers to determine the configuration of an infrared satellite architecture that meets essential requirements cost effectively. In general, the methodology described in this paper consists of process building blocks that can be tailored to the needs of many applications. Descriptions of the spacecraft and payload subsystem models provide insight into The Aerospace Corporation expertise and scope of the SBIRS concept development effort.

  3. The evolution of CMS software performance studies

    NASA Astrophysics Data System (ADS)

    Kortelainen, M. J.; Elmer, P.; Eulisse, G.; Innocente, V.; Jones, C. D.; Tuura, L.

    2011-12-01

    CMS has had an ongoing and dedicated effort to optimize software performance for several years. Initially this effort focused primarily on the cleanup of many issues coming from basic C++ errors, namely reducing dynamic memory churn, unnecessary copies/temporaries and tools to routinely monitor these things. Over the past 1.5 years, however, the transition to 64bit, newer versions of the gcc compiler, newer tools and the enabling of techniques like vectorization have made possible more sophisticated improvements to the software performance. This presentation will cover this evolution and describe the current avenues being pursued for software performance, as well as the corresponding gains.

  4. The Influence of Software Complexity on the Maintenance Effort: Case Study on Software Developed within Educational Process

    ERIC Educational Resources Information Center

    Radulescu, Iulian Ionut

    2006-01-01

    Software complexity is the most important software quality attribute and a very useful instrument in the study of software quality. It is one of the factors that affects most of the software quality characteristics, including maintainability. It is very important to quantify this influence and identify the means to keep it under control by using…

  5. The NASA Software Research Infusion Initiative: Successful Technology Transfer for Software Assurance

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Pressburger, Thomas; Markosian, Lawrence; Feather, Martin S.

    2006-01-01

    New processes, methods and tools are constantly appearing in the field of software engineering. Many of these augur great potential in improving software development processes, resulting in higher quality software with greater levels of assurance. However, there are a number of obstacles that impede their infusion into software development practices. These are the recurring obstacles common to many forms of research. Practitioners cannot readily identify the emerging techniques that may most benefit them, and cannot afford to risk time and effort in evaluating and experimenting with them while there is still uncertainty about whether they will have payoff in this particular context. Similarly, researchers cannot readily identify those practitioners whose problems would be amenable to their techniques and lack the feedback from practical applications necessary to help them to evolve their techniques to make them more likely to be successful. This paper describes an ongoing effort conducted by a software engineering research infusion team, and the NASA Research Infusion Initiative, established by NASA's Software Engineering Initiative, to overcome these obstacles.

  6. High Performance Hydrometeorological Modeling, Land Data Assimilation and Parameter Estimation with the Land Information System at NASA/GSFC

    NASA Astrophysics Data System (ADS)

    Peters-Lidard, C. D.; Kumar, S. V.; Santanello, J. A.; Tian, Y.; Rodell, M.; Mocko, D.; Reichle, R.

    2008-12-01

    The Land Information System (LIS; http://lis.gsfc.nasa.gov; Kumar et al., 2006; Peters-Lidard et al., 2007) is a flexible land surface modeling framework that has been developed with the goal of integrating satellite- and ground-based observational data products and advanced land surface modeling techniques to produce optimal fields of land surface states and fluxes. The LIS software was the co-winner of NASA's 2005 Software of the Year award. LIS facilitates the integration of observations from Earth-observing systems and predictions and forecasts from Earth System and Earth science models into the decision-making processes of partnering agency and national organizations. Due to its flexible software design, LIS can serve both as a Problem Solving Environment (PSE) for hydrologic research to enable accurate global water and energy cycle predictions, and as a Decision Support System (DSS) to generate useful information for application areas including disaster management, water resources management, agricultural management, numerical weather prediction, air quality and military mobility assessment. LIS has evolved from two earlier efforts - North American Land Data Assimilation System (NLDAS; Mitchell et al. 2004) and Global Land Data Assimilation System (GLDAS; Rodell et al. 2004) that focused primarily on improving numerical weather prediction skills by improving the characterization of the land surface conditions. Both of these systems, now use specific configurations of the LIS software in their current implementations. LIS not only consolidates the capabilities of these two systems, but also enables a much larger variety of configurations with respect to horizontal spatial resolution, input datasets and choice of land surface model through 'plugins'. In addition to these capabilities, LIS has also been demonstrated for parameter estimation (Peters-Lidard et al., 2008; Santanello et al., 2007) and data assimilation (Kumar et al., 2008). Examples and case studies demonstrating the capabilities and impacts of LIS for hydrometeorological modeling, land data assimilation and parameter estimation will be presented.

  7. The Rayleigh curve as a model for effort distribution over the life of medium scale software systems. M.S. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Picasso, G. O.; Basili, V. R.

    1982-01-01

    It is noted that previous investigations into the applicability of the Rayleigh curve model to medium scale software development efforts have met with mixed results. The results of these investigations are confirmed by analyses of runs and smoothing. The reasons for the model's failure are found in the subcycle effort data. There are four contributing factors: uniqueness of the environment studied, the influence of holidays, varying management techniques and differences in the data studied.
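    For reference, the Rayleigh (Norden/Putnam) curve being tested has a simple closed form; the sketch below evaluates staffing and cumulative effort for illustrative parameter values, not values from the study.

      # Rayleigh staffing curve: m(t) = 2*K*a*t*exp(-a*t**2), with a = 1/(2*td**2),
      # where K is total life-cycle effort and td is the time of peak staffing.
      import math

      def rayleigh_staffing(t, K, td):
          a = 1.0 / (2.0 * td ** 2)
          return 2.0 * K * a * t * math.exp(-a * t ** 2)

      def cumulative_effort(t, K, td):
          a = 1.0 / (2.0 * td ** 2)
          return K * (1.0 - math.exp(-a * t ** 2))

      K, td = 120.0, 10.0     # e.g. 120 staff-months total, peak staffing at month 10
      for month in (0, 5, 10, 20, 30):
          print(month, round(rayleigh_staffing(month, K, td), 2),
                round(cumulative_effort(month, K, td), 1))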

  8. DAISY: a new software tool to test global identifiability of biological and physiological systems.

    PubMed

    Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D'Angiò, Leontina

    2007-10-01

    A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining if the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, so far no software tools have been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator completely automated software, requiring minimum prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/.

  9. An Uncertainty Structure Matrix for Models and Simulations

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Blattnig, Steve R.; Hemsch, Michael J.; Luckring, James M.; Tripathi, Ram K.

    2008-01-01

    Software that is used for aerospace flight control and to display information to pilots and crew is expected to be correct and credible at all times. This type of software is typically developed under strict management processes, which are intended to reduce defects in the software product. However, modeling and simulation (M&S) software may exhibit varying degrees of correctness and credibility, depending on a large and complex set of factors. These factors include its intended use, the known physics and numerical approximations within the M&S, and the referent data set against which the M&S correctness is compared. The correctness and credibility of an M&S effort is closely correlated to the uncertainty management (UM) practices that are applied to the M&S effort. This paper describes an uncertainty structure matrix for M&S, which provides a set of objective descriptions for the possible states of UM practices within a given M&S effort. The columns in the uncertainty structure matrix contain UM elements or practices that are common across most M&S efforts, and the rows describe the potential levels of achievement in each of the elements. A practitioner can quickly look at the matrix to determine where an M&S effort falls based on a common set of UM practices that are described in absolute terms that can be applied to virtually any M&S effort. The matrix can also be used to plan those steps and resources that would be needed to improve the UM practices for a given M&S effort.

  10. SCaN Testbed Software Development and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Kacpura, Thomas J.; Varga, Denise M.

    2012-01-01

    National Aeronautics and Space Administration (NASA) has developed an on-orbit, adaptable, Software Defined Radio (SDR) Space Telecommunications Radio System (STRS)-based testbed facility to conduct a suite of experiments to advance technologies, reduce risk, and enable future mission capabilities on the International Space Station (ISS). The SCAN Testbed Project will provide NASA, industry, other Government agencies, and academic partners the opportunity to develop and field communications, navigation, and networking technologies in the laboratory and space environment based on reconfigurable, SDR platforms and the STRS Architecture. The SDRs are a new technology for NASA, and the support infrastructure they require is different from legacy, fixed function radios. SDRs offer the ability to reconfigure on-orbit communications by changing software for new waveforms and operating systems to enable new capabilities or fix any anomalies, which was not a previous option. They are not stand alone devices, but require a new approach to effectively control them and flow data. This requires extensive software to be developed to utilize the full potential of these reconfigurable platforms. The paper focuses on development, integration and testing as related to the avionics processor system, and the software required to command, control, monitor, and interact with the SDRs, as well as the other communication payload elements. An extensive effort was required to develop the flight software and meet the NASA requirements for software quality and safety. The flight avionics must be radiation tolerant, and these processors have limited capability in comparison to terrestrial counterparts. A big challenge was that there are three SDRs onboard, and interfacing with multiple SDRs simultaneously complicated the effort. The effort also includes ground software, which is a key element for both the command of the payload, and displaying data created by the payload. The verification of the software was an extensive effort. The challenges of specifying a suitable test matrix with reconfigurable systems that offer numerous configurations are highlighted. Since the flight system testing requires methodical, controlled testing that limits risk, a nearly identical ground system to the on-orbit flight system was required to develop the software and write verification procedures before it was installed and tested on the flight system. The development of the SCAN testbed was an accelerated effort to meet launch constraints, and this paper discusses tradeoffs made to balance needed software functionality and still maintain the schedule. Future upgrades are discussed that optimize the avionics and allow experimenters to utilize the SCAN testbed potential.

  11. Managing configuration software of ground software applications with glueware

    NASA Technical Reports Server (NTRS)

    Larsen, B.; Herrera, R.; Sesplaukis, T.; Cheng, L.; Sarrel, M.

    2003-01-01

    This paper reports on a simple, low-cost effort to streamline the configuration of the uplink software tools. Even though the existing ground system consisted of JPL and custom Cassini software rather than COTS, we chose a glueware approach--reintegrating with wrappers and bridges and adding minimal new functionality.

  12. Myokit: A simple interface to cardiac cellular electrophysiology.

    PubMed

    Clerx, Michael; Collins, Pieter; de Lange, Enno; Volders, Paul G A

    2016-01-01

    Myokit is a new powerful and versatile software tool for modeling and simulation of cardiac cellular electrophysiology. Myokit consists of an easy-to-read modeling language, a graphical user interface, single and multi-cell simulation engines and a library of advanced analysis tools accessible through a Python interface. Models can be loaded from Myokit's native file format or imported from CellML. Model export is provided to C, MATLAB, CellML, CUDA and OpenCL. Patch-clamp data can be imported and used to estimate model parameters. In this paper, we review existing tools to simulate the cardiac cellular action potential to find that current tools do not cater specifically to model development and that there is a gap between easy-to-use but limited software and powerful tools that require strong programming skills from their users. We then describe Myokit's capabilities, focusing on its model description language, simulation engines and import/export facilities in detail. Using three examples, we show how Myokit can be used for clinically relevant investigations, multi-model testing and parameter estimation in Markov models, all with minimal programming effort from the user. This way, Myokit bridges a gap between performance, versatility and user-friendliness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. A Methodology for the Analysis of Programmer Productivity and Effort Estimation within the Framework of Software Conversion.

    DTIC Science & Technology

    1984-05-01

    The coefficient of KCA (programmer's knowledge of the program) initially seemed to be an error...productivity, as expected. An interesting manifestation, supporting a discovery by Oliver, was exhibited by the rating of a programmer's knowledge of...

  14. Provably trustworthy systems.

    PubMed

    Klein, Gerwin; Andronick, June; Keller, Gabriele; Matichuk, Daniel; Murray, Toby; O'Connor, Liam

    2017-10-13

    We present recent work on building and scaling trustworthy systems with formal, machine-checkable proof from the ground up, including the operating system kernel, at the level of binary machine code. We first give a brief overview of the seL4 microkernel verification and how it can be used to build verified systems. We then show two complementary techniques for scaling these methods to larger systems: proof engineering, to estimate verification effort; and code/proof co-generation, for scalable development of provably trustworthy applications.This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Author(s).

  15. Ground System Harmonization Efforts at NASA's Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Smith, Dan

    2011-01-01

    This slide presentation reviews the efforts made at Goddard Space Flight Center in harmonizing the ground systems to assist in collaboration in space ventures. The key elements of this effort are: (1) Moving to a Common Framework, (2) Use of Consultative Committee for Space Data Systems (CCSDS) Standards, (3) Collaboration Across NASA Centers, and (4) Collaboration Across Industry and other Space Organizations. These efforts aim to bring the GSFC systems into harmony with CCSDS standards, to allow for common software, use of Commercial Off the Shelf software, and low-risk development and operations, and to work toward harmonization with other NASA centers.

  16. Parallel computers - Estimate errors caused by imprecise data

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne

    1991-01-01

    A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. A software device is used to produce an ideal solution to the problem, when the computer is capable of computing errors of arbitrary programs. The software engineering aspect of this problem is to describe a device for computing the error estimates in software terms and then to provide precise numbers with error estimates to the user. The feasibility of the program capable of computing both some quantity and its error estimate in the range of possible measurement errors is demonstrated.
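
    The abstract does not spell out how the error-estimating device works, so the sketch below is only an illustration of the general idea: propagating stated input uncertainties through a user-supplied function so that the program returns both a value and an error estimate.

      # Illustration only: first-order propagation of input uncertainties by finite differences,
      # so a program returns both a computed quantity and an error estimate.
      def value_with_error(f, xs, dxs, h=1e-6):
          """Return f(xs) and a linearized error bound given input errors dxs."""
          y = f(*xs)
          err = 0.0
          for i, dx in enumerate(dxs):
              shifted = list(xs)
              shifted[i] += h
              dfdx = (f(*shifted) - y) / h      # numerical partial derivative
              err += abs(dfdx) * dx             # worst-case, interval-style bound
          return y, err

      # Example: area of a rectangle measured as 2.0 +/- 0.1 by 3.0 +/- 0.2
      print(value_with_error(lambda a, b: a * b, [2.0, 3.0], [0.1, 0.2]))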

  17. A Project Management Approach to Using Simulation for Cost Estimation on Large, Complex Software Development Projects

    NASA Technical Reports Server (NTRS)

    Mizell, Carolyn; Malone, Linda

    2007-01-01

    It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with any high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.

  18. Communication and Organization in Software Development: An Empirical Study

    NASA Technical Reports Server (NTRS)

    Seaman, Carolyn B.; Basili, Victor R.

    1996-01-01

    The empirical study described in this paper addresses the issue of communication among members of a software development organization. The independent variables are various attributes of organizational structure. The dependent variable is the effort spent on sharing information which is required by the software development process in use. The research questions upon which the study is based ask whether or not these attributes of organizational structure have an effect on the amount of communication effort expended. In addition, there are a number of blocking variables which have been identified. These are used to account for factors other than organizational structure which may have an effect on communication effort. The study uses both quantitative and qualitative methods for data collection and analysis. These methods include participant observation, structured interviews, and graphical data presentation. The results of this study indicate that several attributes of organizational structure do affect communication effort, but not in a simple, straightforward way. In particular, the distances between communicators in the reporting structure of the organization, as well as in the physical layout of offices, affects how quickly they can share needed information, especially during meetings. These results provide a better understanding of how organizational structure helps or hinders communication in software development.

  19. WFF TOPEX Software Documentation Overview, May 1999. Volume 2

    NASA Technical Reports Server (NTRS)

    Brooks, Ronald L.; Lee, Jeffrey

    2003-01-01

    This document provides an overview of software development activities and the resulting products and procedures developed by the TOPEX Software Development Team (SWDT) at Wallops Flight Facility, in support of the WFF TOPEX Engineering Assessment and Verification efforts.

  20. WISE: Automated support for software project management and measurement. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Sudhakar

    1995-01-01

    One important aspect of software development and IV&V is measurement. Unless a software development effort is measured in some way, it is difficult to judge the effectiveness of current efforts and predict future performance. Collection of metrics and adherence to a process are difficult tasks in a software project. Change activity is a powerful indicator of project status. Automated systems that can handle change requests, issues, and other process documents provide an excellent platform for tracking the status of the project. A World Wide Web based architecture is developed for (a) making metrics collection an implicit part of the software process, (b) providing metric analysis dynamically, (c) supporting automated tools that can complement current practices of in-process improvement, and (d) overcoming geographical barriers. An operational system (WISE) instantiates this architecture, allowing for the improvement of the software process in a realistic environment. The tool tracks issues in the software development process, provides informal communication between users with different roles, supports to-do lists (TDL), and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis, and captures software change data. Automated tools like WISE focus on understanding and managing the software process. The goal is improvement through measurement.

  1. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
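
    The DSN model's calibrated questions and coefficients are not reproduced in this record; purely as an illustration of the parametric form such models share, the sketch below combines a size-driven nominal effort with multipliers derived from prompted answers (all constants are hypothetical).

      # Generic sketch of a parametric cost model: effort = A * size^B * product of cost-driver
      # multipliers obtained from prompted questionnaire answers.  Constants are hypothetical.
      def estimate_effort(ksloc, multipliers, a=2.8, b=1.05):
          """Return effort in person-months; a and b are illustrative calibration constants."""
          effort = a * ksloc ** b
          for m in multipliers.values():
              effort *= m
          return effort

      answers = {                     # hypothetical responses mapped to multipliers
          'task_difficulty': 1.15,
          'dev_environment': 0.95,
          'sw_technology':   0.90,
      }
      print(round(estimate_effort(32.0, answers), 1), 'person-months')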

  2. Government Technology Acquisition Policy: The Case of Proprietary versus Open Source Software

    ERIC Educational Resources Information Center

    Hemphill, Thomas A.

    2005-01-01

    This article begins by explaining the concepts of proprietary and open source software technology, which are now competing in the marketplace. A review of recent individual and cooperative technology development and public policy advocacy efforts, by both proponents of open source software and advocates of proprietary software, subsequently…

  3. NASA software specification and evaluation system design, part 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The research to develop methods for reducing the effort expended in software and verification is reported. The development of a formal software requirements methodology, a formal specifications language, a programming language, a language preprocessor, and code analysis tools are discussed.

  4. Fairbanks North Star borough rural roads upgrade inventory and cost estimation software user guide : version I.

    DOT National Transportation Integrated Search

    2013-04-01

    The Rural Road Upgrade Inventory and Cost Estimation Software is designed by the AUTC research team to help the Fairbanks North Star Borough (FNSB) estimate the cost of upgrading rural roads located in the Borough's Service Areas. The Software pe...

  5. Element Load Data Processor (ELDAP) Users Manual

    NASA Technical Reports Server (NTRS)

    Ramsey, John K., Jr.; Ramsey, John K., Sr.

    2015-01-01

    Often, the shear and tensile forces and moments are extracted from finite element analyses to be used in off-line calculations for evaluating the integrity of structural connections involving bolts, rivets, and welds. Usually the maximum forces and moments are desired for use in the calculations. In situations where there are numerous structural connections of interest for numerous load cases, the effort in finding the true maximum force and/or moment combinations among all fasteners and welds and load cases becomes difficult. The Element Load Data Processor (ELDAP) software described herein makes this effort manageable. This software eliminates the possibility of overlooking the worst-case forces and moments that could result in erroneous positive margins of safety and/or selecting inconsistent combinations of forces and moments resulting in false negative margins of safety. In addition to forces and moments, any scalar quantity output in a PATRAN report file may be evaluated with this software. This software was originally written to fill an urgent need during the structural analysis of the Ares I-X Interstage segment. As such, this software was coded in a straightforward manner with no effort made to optimize or minimize code or to develop a graphical user interface.
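
    As an illustration of the worst-case search that ELDAP automates (the actual tool reads PATRAN report files, whose format is not shown here), the sketch below scans hypothetical element load records across load cases and keeps the governing case for each quantity.

      # Illustrative worst-case search over element loads; record values are hypothetical.
      from collections import defaultdict

      records = [  # (element_id, load_case, shear, tension, moment)
          (101, 'LC1', 4.2, 9.8, 1.1),
          (101, 'LC2', 6.7, 8.0, 1.9),
          (205, 'LC1', 3.1, 12.4, 0.7),
      ]

      worst = defaultdict(lambda: (None, float('-inf')))
      for elem, case, shear, tension, moment in records:
          for name, value in (('shear', shear), ('tension', tension), ('moment', moment)):
              if value > worst[(elem, name)][1]:
                  worst[(elem, name)] = (case, value)   # keep the governing load case

      for (elem, name), (case, value) in sorted(worst.items()):
          print(f'element {elem}: max {name} = {value} (case {case})')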

  6. Teaching Introductory Astronomy "Open and Out" & Looking Forward to the 2017 Solar Eclipse

    NASA Astrophysics Data System (ADS)

    Chu, I.-Wen Mike; Cronkhite, Jeff

    2016-01-01

    We present a new effort on teaching introductory astronomy addressing the specific challenges facing small colleges, including limited resources, changing generational behavior, and new technological trends. The approach incorporates open source solutions into the developmental learning materials, aiming for standardization and wide-scale applicability. In addition, we incorporate events and resources outside the classroom into the learning. Among examples of the development are laboratory exercises based on the planetarium software Stellarium and remediation exercises using Khan Academy instructional videos. As the eventual goal is to move toward greater autonomy, the cycles of improvement necessarily require student feedback in an entirely different instructional style based on egalitarian dialogues. We highlight a laboratory exercise on Earth-Moon distance estimation using parallax of the upcoming 2017 solar eclipse to illustrate the "open and out" philosophy. Achievements, limitations and some diagnostics of the current effort are also presented.

  7. Recent Progress Towards Predicting Aircraft Ground Handling Performance

    NASA Technical Reports Server (NTRS)

    Yager, T. J.; White, E. J.

    1981-01-01

    The significant progress which has been achieved in development of aircraft ground handling simulation capability is reviewed and additional improvements in software modeling identified. The problem associated with providing necessary simulator input data for adequate modeling of aircraft tire/runway friction behavior is discussed and efforts to improve this complex model, and hence simulator fidelity, are described. Aircraft braking performance data obtained on several wet runway surfaces are compared to ground vehicle friction measurements and, by use of empirically derived methods, good agreement between actual and estimated aircraft braking friction from ground vehicle data is shown. The performance of a relatively new friction measuring device, the friction tester, showed great promise in providing data applicable to aircraft friction performance. Additional research efforts to improve methods of predicting tire friction performance are discussed including use of an instrumented tire test vehicle to expand the tire friction data bank and a study of surface texture measurement techniques.

  8. New inverse synthetic aperture radar algorithm for translational motion compensation

    NASA Astrophysics Data System (ADS)

    Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.

    1991-10-01

    Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.

  9. Methodological Framework for Analysis of Buildings-Related Programs with BEAMS, 2008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, Douglas B.; Dirks, James A.; Hostick, Donna J.

    The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official “benefits estimates” for each of its major programs using its Planning, Analysis, and Evaluation (PAE) Team. PAE conducts an annual integrated modeling and analysis effort to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. These estimates are part of EERE’s budget request and are also used in the formulation of EERE’s performance measures. Two of EERE’s major programs are the Building Technologies Program (BT) and the Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports PAE by developing the program characterizations and other market information necessary to provide input to the EERE integrated modeling analysis as part of PAE’s Portfolio Decision Support (PDS) effort. Additionally, PNNL also supports BT by providing line-item estimates for the Program’s internal use. PNNL uses three modeling approaches to perform these analyses. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits using one of those methods: the Building Energy Analysis and Modeling System (BEAMS). BEAMS is a PC-based accounting model that was built in Visual Basic by PNNL specifically for estimating the benefits of buildings-related projects. It allows various types of projects to be characterized including whole-building, envelope, lighting, and equipment projects. This document contains an overview section that describes the estimation process and the models used to estimate energy savings. The body of the document describes the algorithms used within the BEAMS software. This document serves both as stand-alone documentation for BEAMS, and also as a supplemental update of a previous document, Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort, (Elliott et al. 2004b). The areas most changed since the publication of that previous document are those discussing the calculation of lighting and HVAC interactive effects (for both lighting and envelope/whole-building projects). This report does not attempt to convey inputs to BEAMS or the methodology of their derivation.

  10. Overview of the TriBITS Lifecycle Model: Lean/Agile Software Lifecycle Model for Research-based Computational Science and Engineering Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe A; Heroux, Dr. Michael A; Willenbring, James

    2012-01-01

    Software lifecycles are becoming an increasingly important issue for computational science & engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process--respecting the competing needs of research vs. production--cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

  11. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2002-09-30

    Development of Automated Image Analysis Software for Suspended Marine Particle Classification, Scott Samson, Center for Ocean Technology ... and global water column. The project’s objective is to develop automated image analysis software to reduce the effort and time

  12. Use of a quality improvement tool, the prioritization matrix, to identify and prioritize triage software algorithm enhancement.

    PubMed

    North, Frederick; Varkey, Prathiba; Caraballo, Pedro; Vsetecka, Darlene; Bartel, Greg

    2007-10-11

    Complex decision support software can require significant effort in maintenance and enhancement. A quality improvement tool, the prioritization matrix, was successfully used to guide software enhancement of algorithms in a symptom assessment call center.
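
    The record does not list the matrix criteria actually used; the sketch below only illustrates the weighted-scoring arithmetic behind a prioritization matrix, with hypothetical criteria, weights, and candidate enhancements.

      # Weighted-scoring sketch of a prioritization matrix; criteria, weights and scores are hypothetical.
      criteria_weights = {'patient_safety': 0.5, 'call_volume': 0.3, 'effort': 0.2}

      candidates = {
          'chest-pain algorithm': {'patient_safety': 9, 'call_volume': 7, 'effort': 4},
          'med-refill algorithm': {'patient_safety': 3, 'call_volume': 9, 'effort': 8},
      }

      ranked = sorted(
          candidates.items(),
          key=lambda kv: sum(criteria_weights[c] * s for c, s in kv[1].items()),
          reverse=True,
      )
      for name, scores in ranked:
          total = sum(criteria_weights[c] * s for c, s in scores.items())
          print(f'{name}: weighted score {total:.1f}')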

  13. Complexity Measure for the Prototype System Description Language (PSDL)

    DTIC Science & Technology

    2002-06-01

    Albrecht, A. and Gaffney, J., Software Function, Source Lines of Code, and Development Effort Prediction, IEEE Transactions on Software Engineering ... Through Measurement”; Proceedings of the IEEE, Vol. 77, No. 4, April 1989. Schach, Stephen R., Software Engineering, Second Edition, IRWIN, Burr Ridge

  14. Dose Estimating Application Software Modification: Additional Function of a Size-Specific Effective Dose Calculator and Auto Exposure Control.

    PubMed

    Kobayashi, Masanao; Asada, Yasuki; Matsubara, Kosuke; Suzuki, Shouichi; Matsunaga, Yuta; Haba, Tomonobu; Kawaguchi, Ai; Daioku, Tomihiko; Toyama, Hiroshi; Kato, Ryoichi

    2017-05-01

    Adequate dose management during computed tomography is important. In the present study, the dosimetric application software ImPACT was extended with a functional calculator for the size-specific dose estimate and with scan settings for the auto exposure control (AEC) technique. This study aimed to assess the practicality and accuracy of the modified ImPACT software for dose estimation. We compared the conversion factors identified by the software with the values reported by the American Association of Physicists in Medicine Task Group 204, and we noted similar results. Moreover, doses were calculated with the AEC technique and a fixed tube current of 200 mA for the chest-pelvis region. The modified ImPACT software could estimate each organ dose, which was based on the modulated tube current. The ability to perform beneficial modifications indicates the flexibility of the ImPACT software. The ImPACT software can be further modified for estimation of other doses. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
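
    A sketch of the size-specific dose estimate (SSDE) arithmetic such a calculator performs is shown below: SSDE is the reported CTDIvol scaled by a size-dependent conversion factor. The exponential fit and its coefficients are approximate illustrative values in the spirit of the AAPM TG 204 tables, not the exact values used by the modified ImPACT software.

      # Illustrative SSDE calculation: SSDE = f(size) * CTDIvol, with an approximate
      # exponential fit for the conversion factor (coefficients are illustrative).
      import math

      def ssde(ctdi_vol_mgy, effective_diameter_cm, a=3.70, b=0.0367):
          """Return SSDE in mGy for a given CTDIvol and patient effective diameter."""
          f = a * math.exp(-b * effective_diameter_cm)   # size-dependent conversion factor
          return f * ctdi_vol_mgy

      # Example: CTDIvol of 10 mGy reported for a patient with 25 cm effective diameter
      print(round(ssde(10.0, 25.0), 1), 'mGy')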

  15. The Computational Infrastructure for Geodynamics as a Community of Practice

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Kellogg, L. H.

    2016-12-01

    Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual or small groups of researchers to develop scientifically-sound software is impossible to sustain, duplicates effort, and makes it difficult for scientists to adopt state-of-the art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and better their practices formally through webinar series, workshops, and tutorials and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human readable input formats.

  16. Development of Methodologies for the Estimation of Thermal Properties Associated with Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1996-01-01

    A thermal stress analysis is an important aspect in the design of aerospace structures and vehicles such as the High Speed Civil Transport (HSCT) at the National Aeronautics and Space Administration Langley Research Center (NASA-LaRC). These structures are complex and are often composed of numerous components fabricated from a variety of different materials. The thermal loads on these structures induce temperature variations within the structure, which in turn result in the development of thermal stresses. Therefore, a thermal stress analysis requires knowledge of the temperature distributions within the structures which consequently necessitates the need for accurate knowledge of the thermal properties, boundary conditions and thermal interface conditions associated with the structural materials. The goal of this proposed multi-year research effort was to develop estimation methodologies for the determination of the thermal properties and interface conditions associated with aerospace vehicles. Specific objectives focused on the development and implementation of optimal experimental design strategies and methodologies for the estimation of thermal properties associated with simple composite and honeycomb structures. The strategy used in this multi-year research effort was to first develop methodologies for relatively simple systems and then systematically modify these methodologies to analyze complex structures. This can be thought of as a building block approach. This strategy was intended to promote maximum usability of the resulting estimation procedure by NASA-LARC researchers through the design of in-house experimentation procedures and through the use of an existing general purpose finite element software.

  17. The Emergence of Open-Source Software in North America

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while…

  18. Space Station communications and tracking systems modeling and RF link simulation

    NASA Technical Reports Server (NTRS)

    Tsang, Chit-Sang; Chie, Chak M.; Lindsey, William C.

    1986-01-01

    In this final report, the effort spent on Space Station Communications and Tracking System Modeling and RF Link Simulation is described in detail. The effort is mainly divided into three parts: frequency division multiple access (FDMA) system simulation modeling and software implementation; a study on design and evaluation of a functional computerized RF link simulation/analysis system for Space Station; and a study on design and evaluation of simulation system architecture. This report documents the results of these studies. In addition, a separate User's Manual on Space Communications Simulation System (SCSS) (Version 1) documents the software developed for the Space Station FDMA communications system simulation. The final report, SCSS user's manual, and the software located in the NASA JSC system analysis division's VAX 750 computer together serve as the deliverables from LinCom for this project effort.

  19. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.

  20. DAISY: a new software tool to test global identifiability of biological and physiological systems

    PubMed Central

    Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D’Angiò, Leontina

    2009-01-01

    A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining if the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear, but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, so far no software tools have been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator with completely automated software, requiring minimal prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/. PMID:17707944

  1. Integrated optomechanical analysis and testing software development at MIT Lincoln Laboratory

    NASA Astrophysics Data System (ADS)

    Stoeckel, Gerhard P.; Doyle, Keith B.

    2013-09-01

    Advanced analytical software capabilities are being developed to advance the design of prototypical hardware in the Engineering Division at MIT Lincoln Laboratory. The current effort is focused on the integration of analysis tools tailored to the work flow, organizational structure, and current technology demands. These tools are being designed to provide superior insight into the interdisciplinary behavior of optical systems and enable rapid assessment and execution of design trades to optimize the design of optomechanical systems. The custom software architecture is designed to exploit and enhance the functionality of existing industry standard commercial software, provide a framework for centralizing internally developed tools, and deliver greater efficiency, productivity, and accuracy through standardization, automation, and integration. Specific efforts have included the development of a feature-rich software package for Structural-Thermal-Optical Performance (STOP) modeling, advanced Line Of Sight (LOS) jitter simulations, and improved integration of dynamic testing and structural modeling.

  2. Infusing Software Assurance Research Techniques into Use

    NASA Technical Reports Server (NTRS)

    Pressburger, Thomas; DiVito, Ben; Feather, Martin S.; Hinchey, Michael; Markosian, Lawrence; Trevino, Luis C.

    2006-01-01

    Research in the software engineering community continues to lead to new development techniques that encompass processes, methods and tools. However, a number of obstacles impede their infusion into software development practices. These are the recurring obstacles common to many forms of research. Practitioners cannot readily identify the emerging techniques that may benefit them, and cannot afford to risk time and effort evaluating and trying one out while there remains uncertainty about whether it will work for them. Researchers cannot readily identify the practitioners whose problems would be amenable to their techniques, and, lacking feedback from practical applications, are hard-pressed to gauge where and in what ways to evolve their techniques to make them more likely to be successful. This paper describes an ongoing effort conducted by a software engineering research infusion team established by NASA's Software Engineering Initiative to overcome these obstacles.

  3. Automating Software Design Metrics.

    DTIC Science & Technology

    1984-02-01

    High quality software is of interest to both the software engineering community and its users. As ... contributions of many other software engineering efforts, most notably [MCC 77] and [Boe 83b], which have defined and refined a framework for quantifying ... Software metrics can be useful within the context of an integrated software engineering environment. The purpose of this

  4. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.

  5. A conceptual model to empower software requirements conflict detection and resolution with rule-based reasoning

    NASA Astrophysics Data System (ADS)

    Ahmad, Sabrina; Jalil, Intan Ermahani A.; Ahmad, Sharifah Sakinah Syed

    2016-08-01

    It is seldom technical issues that impede the process of eliciting software requirements. The involvement of multiple stakeholders usually leads to conflicts, and therefore conflict detection and resolution effort is crucial. This paper forwards an improved conceptual model to assist the conflict detection and resolution effort, extending the model's ability and improving overall performance. The significance of the new model is that it enables automated detection of conflicts and their severity levels using rule-based reasoning.

  6. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    PubMed

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used Serum Creatinine method (SrCrM) of GFR estimation also requires the use of online calculators which cannot be used without internet access. We have developed user-friendly software "GFR estimation software" which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We have used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool to develop this software. We have used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine have been done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs mathematical calculations correctly and is user-friendly. This software also enables storage and easy retrieval of the raw data, patient's information and calculated GFR for further processing and comparison. This is user-friendly software to calculate the GFR by various plasma sampling methods and blood parameters. This software is also a good system for storing the raw and processed data for future analysis.
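
    Of the serum creatinine methods listed, the Cockcroft-Gault formula is the simplest to illustrate; the sketch below shows the standard calculation (creatinine in mg/dL, weight in kg, result in mL/min) rather than the software's actual implementation.

      # Standard Cockcroft-Gault creatinine clearance estimate (not the software's own code).
      def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
          crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
          return crcl * 0.85 if female else crcl

      # Example: 60-year-old woman, 70 kg, serum creatinine 1.1 mg/dL
      print(round(cockcroft_gault(60, 70, 1.1, female=True), 1), 'mL/min')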

  7. The Large Synoptic Survey Telescope project management control system

    NASA Astrophysics Data System (ADS)

    Kantor, Jeffrey P.

    2012-09-01

    The Large Synoptic Survey Telescope (LSST) program is jointly funded by the NSF, the DOE, and private institutions and donors. From an NSF funding standpoint, the LSST is a Major Research Equipment and Facilities (MREFC) project. The NSF funding process requires proposals and D&D reviews to include activity-based budgets and schedules; documented basis of estimates; risk-based contingency analysis; and cost escalation and categorization. "Out-of-the-box," the commercial tool Primavera P6 contains approximately 90% of the planning and estimating capability needed to satisfy R&D phase requirements, and it is customizable/configurable for the remainder with relatively little effort. We describe the customization/configuration and use of Primavera for the LSST Project Management Control System (PMCS), assess our experience to date, and describe future directions. Examples in this paper are drawn from the LSST Data Management System (DMS), which is one of three main subsystems of the LSST and is funded by the NSF. By astronomy standards the LSST DMS is a large data management project, processing and archiving over 70 petabytes of image data, producing over 20 petabytes of catalogs annually, and generating 2 million transient alerts per night. Over the 6-year construction and commissioning phase, the DM project is estimated to require 600,000 hours of engineering effort. In total, the DMS cost is approximately 60% hardware/system software and 40% labor.

  8. WE-G-BRA-02: SafetyNet: Automating Radiotherapy QA with An Event Driven Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, S; Kessler, M; Litzenberg, D

    2015-06-15

    Purpose: Quality assurance is an essential task in radiotherapy that often requires many manual tasks. We investigate the use of an event driven framework in conjunction with software agents to automate QA and eliminate wait times. Methods: An in house developed subscription-publication service, EventNet, was added to the Aria OIS to be a message broker for critical events occurring in the OIS and software agents. Software agents operate without user intervention and perform critical QA steps. The results of the QA are documented and the resulting event is generated and passed back to EventNet. Users can subscribe to those events and receive messages based on custom filters designed to send passing or failing results to physicists or dosimetrists. Agents were developed to expedite the following QA tasks: Plan Revision, Plan 2nd Check, SRS Winston-Lutz isocenter, Treatment History Audit, Treatment Machine Configuration. Results: Plan approval in the Aria OIS was used as the event trigger for plan revision QA and Plan 2nd check agents. The agents pulled the plan data, executed the prescribed QA, stored the results and updated EventNet for publication. The Winston Lutz agent reduced QA time from 20 minutes to 4 minutes and provided a more accurate quantitative estimate of radiation isocenter. The Treatment Machine Configuration agent automatically reports any changes to the Treatment machine or HDR unit configuration. The agents are reliable, act immediately, and execute each task identically every time. Conclusion: An event driven framework has inverted the data chase in our radiotherapy QA process. Rather than have dosimetrists and physicists push data to QA software and pull results back into the OIS, the software agents perform these steps immediately upon receiving the sentinel events from EventNet. Mr Keranen is an employee of Varian Medical Systems. Dr. Moran’s institution receives research support for her effort for a linear accelerator QA project from Varian Medical Systems. Other quality projects involving her effort are funded by Blue Cross Blue Shield of Michigan, Breast Cancer Research Foundation, and the NIH.
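
    A minimal sketch of the publish/subscribe pattern described here is shown below; EventNet, the Aria OIS events, and the agent logic are proprietary, so the broker, event names, and QA check are illustrative stand-ins only.

      # Illustrative publish/subscribe broker with one QA agent; names and logic are stand-ins.
      from collections import defaultdict

      class Broker:
          def __init__(self):
              self._subs = defaultdict(list)
          def subscribe(self, event, handler):
              self._subs[event].append(handler)
          def publish(self, event, payload):
              for handler in self._subs[event]:
                  handler(payload)

      def plan_second_check_agent(plan):
          """Runs automatically on plan approval; result is published for review."""
          result = {'plan': plan, 'passed': True}          # placeholder QA logic
          broker.publish('qa.plan_second_check.done', result)

      broker = Broker()
      broker.subscribe('plan.approved', plan_second_check_agent)
      broker.subscribe('qa.plan_second_check.done', lambda r: print('QA result:', r))

      broker.publish('plan.approved', 'patient-123/planA')   # sentinel event from the OIS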

  9. Open Source Molecular Modeling

    PubMed Central

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-01-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126

  10. Ruby on Rails Issue Tracker

    NASA Technical Reports Server (NTRS)

    Rodriguez, Juan Jared

    2014-01-01

    The purpose of this report is to detail the tasks accomplished as a NASA NIFS intern for the summer 2014 session. This internship opportunity is to develop an issue tracker Ruby on Rails web application to improve the communication of developmental anomalies between the Support Software Computer Software Configuration Item (CSCI) teams, System Build and Information Architecture. As many may know software development is an arduous, time consuming, collaborative effort. It involves nearly as much work designing, planning, collaborating, discussing, and resolving issues as effort expended in actual development. This internship opportunity was put in place to help alleviate the amount of time spent discussing issues such as bugs, missing tests, new requirements, and usability concerns that arise during development and throughout the life cycle of software applications once in production.

  11. Continuation of research into software for space operations support, volume 1

    NASA Technical Reports Server (NTRS)

    Collier, Mark D.; Killough, Ronnie; Martin, Nancy L.

    1990-01-01

    A prototype workstation executive called the Hardware Independent Software Development Environment (HISDE) was developed. Software technologies relevant to workstation executives were researched and evaluated, and HISDE was used as a test bed for prototyping efforts. New X Windows software concepts and technology were introduced into workstation executives and related applications. The four research efforts performed included: (1) Research into the usability and efficiency of Motif (an X Windows based graphical user interface), which consisted of converting the existing Athena widget based HISDE user interface to Motif, demonstrating the usability of Motif and providing insight into the level of effort required to translate an application from one widget set to another; (2) Prototyping a real time data display widget, which consisted of researching methods for, and prototyping the selected method of, displaying textual values in an efficient manner; (3) X Windows performance evaluation, which consisted of a series of performance measurements which demonstrated the ability of low level X Windows to display textual information; (4) Conversion of the Display Manager, the application used by NASA for data display during operational mode, to X Window/Motif.

  12. Getting started on metrics - Jet Propulsion Laboratory productivity and quality

    NASA Technical Reports Server (NTRS)

    Bush, M. W.

    1990-01-01

    A review is presented to describe the effort and difficulties of reconstructing fifteen years of JPL software history. In 1987 the collection and analysis of project data were started with the objective of creating laboratory-wide measures of quality and productivity for software development. As a result of this two-year Software Product Assurance metrics study, a rough measurement foundation for software productivity and software quality, and an order-of-magnitude quantitative baseline for software systems and subsystems are now available.

  13. Analysis of patient CT dose data using virtualdose

    NASA Astrophysics Data System (ADS)

    Bennett, Richard

    X-ray computed tomography has many benefits in medical and research applications. Over the last decade, CT usage has increased greatly in hospitals and medical diagnosis. In pediatric care, from 2000 to 2006, abdominal CT scans in the emergency room increased by 49 % and chest CT by 425 % (Broder 2007). Enormous effort has been expended across multiple academic and government groups to determine an accurate measure of organ dose to patients who undergo a CT scan, owing to the inherent risks of ionizing radiation. Considering these intrinsic risks, CT dose estimating software becomes a necessary tool that health care providers and radiologists must use to determine the metrics on which to base the risks versus rewards of having an x-ray CT scan. This thesis models the resultant organ dose as body mass increases for patients with all other related scan parameters fixed. In addition, this thesis compares a modern dose estimating software, VirtualDose CT, to two other programs, CT-Expo and ImPACT CT. The comparison shows how each software's theoretical basis and the phantom used to represent the human body affect the range of results in organ dose. The CT-Expo and ImPACT CT dose estimating software use a different model for anatomical representation of the organs in the human body, and the results show how that approach dramatically changes the outcome. The results categorize four datasets compared across the three software types where the appropriate phantom was available. Modeling was done to simulate chest-abdomen-pelvis scans and whole body scans. Organ dose difference versus body mass index shows that as body mass index (BMI) ranges from 23.5 kg/m2 to 45 kg/m2, organ dose trends a percent change from -4.58 to -176.19 %. Comparing organ dose difference with increasing x-ray tube potential from 120 kVp to 140 kVp, the percent change in organ dose increases from 55 % to 65 % across all phantoms. In comparing VirtualDose to CT-Expo for organ dose difference versus age, male phantoms show percent differences of -19 % to 25 % for various organs, excluding the bone surface and breast tissue results. Finally, for organ dose difference across all software for the average adult phantom, the results range from -45 % to 6 % in the comparison of ImPACT CT to VirtualDose and -27 % to 66 % for the comparison of CT-Expo to VirtualDose. In the comparison for increased BMI (done only in VirtualDose), results show that with all other parameters fixed, the organ dose goes down as BMI increases, which is due to the increase in adipose tissue and bulk of the patient model. The range of results when comparing all three software packages is wide, in some cases greater than 150 %; it is evident that using a different anatomical basis for the human phantom and a different theoretical basis for the dose estimation will cause fluctuation in the results. Therefore, choosing the software with the most accurate human phantom will provide results closer to the true dose to the organ.

  14. Software cost/resource modeling: Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. J.

    1980-01-01

    A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Force Development Center, University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.

  15. The Software Design Document: More than a User's Manual.

    ERIC Educational Resources Information Center

    Bowers, Dennis

    1989-01-01

    Discusses the value of creating design documentation for computer software so that it may serve as a model for similar design efforts. Components of the software design document are described, including program flowcharts, graphic representation of screen displays, storyboards, and evaluation procedures. An example is given using HyperCard. (three…

  16. Analyzing a Mature Software Inspection Process Using Statistical Process Control (SPC)

    NASA Technical Reports Server (NTRS)

    Barnard, Julie; Carleton, Anita; Stamper, Darrell E. (Technical Monitor)

    1999-01-01

    This paper presents a cooperative effort in which the Software Engineering Institute and the Space Shuttle Onboard Software Project could experiment with applying Statistical Process Control (SPC) analysis to inspection activities. The topics include: 1) SPC Collaboration Overview; 2) SPC Collaboration Approach and Results; and 3) Lessons Learned.

  17. Collaborative Software and Focused Distraction in the Classroom

    ERIC Educational Resources Information Center

    Rhine, Steve; Bailey, Mark

    2011-01-01

    In search of strategies for increasing their pre-service teachers' thoughtful engagement with content and in an effort to model connection between choice of technology and pedagogical goals, the authors utilized collaborative software during class time. Collaborative software allows all students to write simultaneously on a single collective…

  18. Applying Model-Based Reasoning to the FDIR of the Command and Data Handling Subsystem of the International Space Station

    NASA Technical Reports Server (NTRS)

    Robinson, Peter; Shirley, Mark; Fletcher, Daryl; Alena, Rick; Duncavage, Dan; Lee, Charles

    2003-01-01

    All of the International Space Station (ISS) systems which require computer control depend upon the hardware and software of the Command and Data Handling System (C&DH) system, currently a network of over 30 386-class computers called Multiplexer/Demultiplexers (MDMs) [18]. The Caution and Warning System (C&W) [7], a set of software tasks that runs on the MDMs, is responsible for detecting, classifying, and reporting errors in all ISS subsystems including the C&DH. Fault Detection, Isolation and Recovery (FDIR) of these errors is typically handled with a combination of automatic and human effort. We are developing an Advanced Diagnostic System (ADS) to augment the C&W system with decision support tools to aid in root cause analysis as well as resolve differing human and machine C&DH state estimates. These tools, which draw from sources in model-based reasoning [16,29], will improve the speed and accuracy of flight controllers by reducing the uncertainty in C&DH state estimation, allowing for a more complete assessment of risk. We have run tests with ISS telemetry and focus on those C&W events which relate to the C&DH system itself. This paper describes our initial results and subsequent plans.

  19. Evaluating the Usefulness of High-Temporal Resolution Vegetation Indices to Identify Crop Types

    NASA Astrophysics Data System (ADS)

    Hilbert, K.; Lewis, D.; O'Hara, C. G.

    2006-12-01

    The National Aeronautics and Space Administration (NASA) and the United States Department of Agriculture (USDA) jointly sponsored research covering the 2004 to 2006 South American crop seasons that focused on developing methods for the USDA's Foreign Agricultural Service's (FAS) Production Estimates and Crop Assessment Division (PECAD) to identify crop types using MODIS-derived, hyper-temporal Normalized Difference Vegetation Index (NDVI) images. NDVI images were composited in 8-day intervals from daily NDVI images and aggregated to create a hyper-temporal NDVI layerstack. This NDVI layerstack was used as input to image classification algorithms. Research results indicated that creating high-temporal resolution Normalized Difference Vegetation Index (NDVI) composites from NASA's MODerate Resolution Imaging Spectroradiometer (MODIS) data products provides useful input to crop type classifications as well as potentially useful input for regional crop productivity modeling efforts. A current NASA-sponsored Rapid Prototyping Capability (RPC) experiment will assess the utility of simulated future Visible Infrared Imager / Radiometer Suite (VIIRS) imagery for conducting NDVI-derived land cover and specific crop type classifications. In the experiment, methods will be considered to refine current MODIS data streams, reduce the noise content of the MODIS data, and utilize the MODIS data as an input to the VIIRS simulation process. The effort is also being conducted in concert with an ISS project that will further evaluate, verify and validate the usefulness of specific data products to provide remote sensing-derived input for the Sinclair Model, a semi-mechanistic model for estimating crop yield. The study area encompasses a large portion of the Pampas region of Argentina--a major world producer of crops such as corn, soybeans, and wheat, which makes it a competitor to the US. ITD partnered with researchers at the Center for Surveying Agricultural and Natural Resources (CREAN) of the National University of Cordoba, Argentina, and CREAN personnel collected and continue to collect field-level, GIS-based in situ information. Current efforts involve both developing and optimizing software tools for the necessary data processing. The software includes the Time Series Product Tool (TSPT), Leica's ERDAS Imagine, and Mississippi State University's Temporal Map Algebra computational tools.
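
    As a sketch of the two processing steps described (computing NDVI from red and near-infrared reflectance, then building an 8-day composite layerstack), the code below uses maximum-value compositing, which is assumed here for illustration and may differ from the compositing rule actually used.

      # NDVI computation and an assumed 8-day maximum-value compositing step.
      import numpy as np

      def ndvi(nir, red):
          return (nir - red) / (nir + red + 1e-9)    # small epsilon avoids divide-by-zero

      def eight_day_composites(daily_ndvi):
          """daily_ndvi: array of shape (days, rows, cols) -> (periods, rows, cols)."""
          days = daily_ndvi.shape[0] - daily_ndvi.shape[0] % 8
          stacked = daily_ndvi[:days].reshape(-1, 8, *daily_ndvi.shape[1:])
          return stacked.max(axis=1)                  # max-value composite per 8-day window

      daily = np.random.rand(16, 4, 4)                # toy stand-in for daily NDVI images
      print(eight_day_composites(daily).shape)        # -> (2, 4, 4) layerstack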

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana L. Kelly

    Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
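
    The paper's copula and Bayesian estimation code are not reproduced in this record; as an illustration only, the sketch below draws dependent failure times from a Gaussian copula with exponential marginals (the correlation and scale values are made up).

      # Illustrative Gaussian-copula sampling of dependent failure times; all values hypothetical.
      import numpy as np
      from scipy import stats

      rho = 0.7                                     # assumed inter-component dependence
      cov = np.array([[1.0, rho], [rho, 1.0]])
      rng = np.random.default_rng(1)

      z = rng.multivariate_normal([0.0, 0.0], cov, size=10000)   # correlated normals
      u = stats.norm.cdf(z)                                       # copula: uniforms on [0,1]^2
      failure_times = stats.expon(scale=1000.0).ppf(u)            # exponential marginals (hours)

      # Empirical probability that both redundant components fail within 100 hours
      print(np.mean((failure_times < 100.0).all(axis=1)))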

  1. Evidence-based pathology in its second decade: toward probabilistic cognitive computing.

    PubMed

    Marchevsky, Alberto M; Walts, Ann E; Wick, Mark R

    2017-03-01

    Evidence-based pathology advocates using a combination of best available data ("evidence") from the literature and personal experience for the diagnosis, estimation of prognosis, and assessment of other variables that impact individual patient care. Evidence-based pathology relies on systematic reviews of the literature, evaluation of the quality of evidence as categorized by evidence levels and statistical tools such as meta-analyses, estimates of probabilities and odds, and others. However, it is well known that previously "statistically significant" information usually does not accurately forecast the future for individual patients. There is great interest in "cognitive computing" in which "data mining" is combined with "predictive analytics" designed to forecast future events and estimate the strength of those predictions. This study demonstrates the use of IBM Watson Analytics software to evaluate and predict the prognosis of 101 patients with typical and atypical pulmonary carcinoid tumors in which Ki-67 indices have been determined. The results obtained with this system are compared with those previously reported using "routine" statistical software and the help of a professional statistician. IBM Watson Analytics interactively provides statistical results that are comparable to those obtained with routine statistical tools but much more rapidly, with considerably less effort and with interactive graphics that are intuitively easy to apply. It also enables analysis of natural language variables and yields detailed survival predictions for patient subgroups selected by the user. Potential applications of this tool and basic concepts of cognitive computing are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Deriving the Cost of Software Maintenance for Software Intensive Systems

    DTIC Science & Technology

    2011-08-29

    more of software maintenance). Figure 4. SEER-SEM Maintenance Effort by Year Report (Reifer, Allen, Fersch, Hitchings, Judy, & Rosa, 2010...understand the linear relationship between two variables. The formula for the simple Pearson product-moment correlation is represented in Equation 5...standardization is required across the software maintenance community in order to ensure that the data being recorded can be employed beyond the agency or
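
    The snippet refers to the simple Pearson product-moment correlation (its Equation 5) without reproducing the formula. A minimal sketch of the standard formula follows, with invented size/effort values used only as a numeric check; they are not data from the report.

```python
import numpy as np

def pearson_r(x, y):
    """Simple Pearson product-moment correlation:
    r = sum((x - x_bar)*(y - y_bar)) / sqrt(sum((x - x_bar)^2) * sum((y - y_bar)^2))
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))

# Illustrative only: correlation between delivered size and annual maintenance effort.
ksloc = [20, 45, 80, 120, 200]
effort_hours = [1500, 3200, 6100, 9400, 15800]
print(round(pearson_r(ksloc, effort_hours), 3))
```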

  3. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2003-09-30

    Development of Automated Image Analysis Software for Suspended Marine Particle Classification. Scott Samson, Center for Ocean Technology...objective is to develop automated image analysis software to reduce the effort and time required for manual identification of plankton images. Automated

  4. Eric Bonnema | NREL

    Science.gov Websites

    contributes to the research efforts for commercial buildings. This effort is dedicated to studying the commercial sector, whole-building energy simulation, scientific computing, and software configuration and

  5. DSS command software update

    NASA Technical Reports Server (NTRS)

    Stinnett, W. G.

    1980-01-01

    The modifications, additions, and testing results for a version of the Deep Space Station command software, generated for support of the Voyager Saturn encounter, are discussed. The software update requirements included efforts to: (1) recode portions of the software to permit recovery of approximately 2000 words of memory; (2) correct five Voyager Ground Data System liens; (3) provide capability to automatically turn off the command processor assembly local printer during periods of low activity; and (4) correct anomalies existing in the software.

  6. On-Orbit Software Analysis

    NASA Technical Reports Server (NTRS)

    Moran, Susanne I.

    2004-01-01

    The On-Orbit Software Analysis Research Infusion Project was done by Intrinsyx Technologies Corporation (Intrinsyx) at the National Aeronautics and Space Administration (NASA) Ames Research Center (ARC). The Project was a joint collaborative effort between NASA Codes IC and SL, Kestrel Technology (Kestrel), and Intrinsyx. The primary objectives of the Project were: discovery and verification of software program properties and dependencies, detection and isolation of software defects across different versions of software, and compilation of historical data and technical expertise for future applications.

  7. U.S. ENVIRONMENTAL PROTECTION AGENCY'S LANDFILL GAS EMISSION MODEL (LANDGEM)

    EPA Science Inventory

    The paper discusses EPA's available software for estimating landfill gas emissions. This software is based on a first-order decomposition rate equation using empirical data from U.S. landfills. The software provides a relatively simple approach to estimating landfill gas emissi...
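
    The record cites a first-order decomposition rate equation. The sketch below shows a minimal first-order-decay gas-generation estimate in that spirit; the parameter names (k, L0) follow the usual first-order convention, but the simplified annual summation and all numbers are illustrative and should not be read as the exact LandGEM implementation.

```python
# Minimal sketch of a first-order-decay landfill gas estimate in the spirit of
# LandGEM. Values and the simplified annual summation are illustrative only.
import math

k = 0.05      # decay rate constant (1/yr), illustrative
L0 = 170.0    # potential methane generation capacity (m^3 CH4 per Mg waste), illustrative

# Waste accepted per year (Mg), indexed by year of acceptance (invented).
waste_by_year = {2000: 50_000, 2001: 55_000, 2002: 60_000, 2003: 62_000}

def methane_generation(year):
    """Annual CH4 generation (m^3/yr) from all waste in place by `year`."""
    total = 0.0
    for accept_year, mass in waste_by_year.items():
        age = year - accept_year
        if age > 0:
            total += k * L0 * mass * math.exp(-k * age)
    return total

print(f"Estimated CH4 generation in 2010: {methane_generation(2010):,.0f} m^3/yr")
```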

  8. The Toxicity Estimation Software Tool (T.E.S.T.)

    EPA Science Inventory

    The Toxicity Estimation Software Tool (T.E.S.T.) has been developed to estimate toxicological values for aquatic and mammalian species considering acute and chronic endpoints for screening purposes within TSCA and REACH programs.

  9. The Computational Infrastructure for Geodynamics: An Example of Software Curation and Citation in the Geodynamics Community

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Kellogg, L. H.

    2017-12-01

    Curation of software promotes discoverability and accessibility and works hand in hand with scholarly citation to ascribe value to, and provide recognition for software development. To meet this challenge, the Computational Infrastructure for Geodynamics (CIG) maintains a community repository built on custom and open tools to promote discovery, access, identification, credit, and provenance of research software for the geodynamics community. CIG (geodynamics.org) originated from recognition of the tremendous effort required to develop sound software and the need to reduce duplication of effort and to sustain community codes. CIG curates software across 6 domains and has developed and follows software best practices that include establishing test cases, documentation, and a citable publication for each software package. CIG software landing web pages provide access to current and past releases; many are also accessible through the CIG community repository on github. CIG has now developed abc - attribution builder for citation to enable software users to give credit to software developers. abc uses zenodo as an archive and as the mechanism to obtain a unique identifier (DOI) for scientific software. To assemble the metadata, we searched the software's documentation and research publications and then requested the primary developers to verify. In this process, we have learned that each development community approaches software attribution differently. The metadata gathered is based on guidelines established by groups such as FORCE11 and OntoSoft. The rollout of abc is gradual as developers are forward-looking, rarely willing to go back and archive prior releases in zenodo. Going forward all actively developed packages will utilize the zenodo and github integration to automate the archival process when a new release is issued. How to handle legacy software, multi-authored libraries, and assigning roles to software remain open issues.

  10. Requirement Metrics for Risk Identification

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence

    1996-01-01

    The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects to improve the quality of software which they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.

  11. Training Software Developers and Designers to Conduct Usability Evaluations

    ERIC Educational Resources Information Center

    Skov, Mikael Brasholt; Stage, Jan

    2012-01-01

    Many efforts to improve the interplay between usability evaluation and software development rely either on better methods for conducting usability evaluations or on better formats for presenting evaluation results in ways that are useful for software designers and developers. Both of these approaches depend on a complete division of work between…

  12. Software measurement guidebook

    NASA Technical Reports Server (NTRS)

    Bassman, Mitchell J.; Mcgarry, Frank; Pajerski, Rose

    1994-01-01

    This Software Measurement Guidebook presents information on the purpose and importance of measurement. It discusses the specific procedures and activities of a measurement program and the roles of the people involved. The guidebook also clarifies the roles that measurement can and must play in the goal of continual, sustained improvement for all software production and maintenance efforts.

  13. Software architecture standard for simulation virtual machine, version 2.0

    NASA Technical Reports Server (NTRS)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  14. Software Cost-Estimation Model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.
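
    SOFTCOST combines several published cost models and adjusts for dozens of implementation factors. As a hedged illustration of that general model family only (not SOFTCOST's own algorithms), the sketch below evaluates a basic COCOMO-style effort and schedule equation using the published basic-COCOMO semi-detached coefficients.

```python
# Illustrative COCOMO-style effort/schedule calculation, shown only as an
# example of the model family SOFTCOST draws on; the coefficients are the
# published basic-COCOMO "semi-detached" values, not SOFTCOST's own.
def basic_cocomo(ksloc, a=3.0, b=1.12, c=2.5, d=0.35):
    effort_pm = a * ksloc ** b          # effort in person-months
    schedule_mo = c * effort_pm ** d    # development schedule in months
    return effort_pm, schedule_mo

effort, schedule = basic_cocomo(32.0)   # an illustrative 32 KSLOC task
print(f"Effort: {effort:.1f} person-months, schedule: {schedule:.1f} months")
```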

  15. ToxPredictor: a Toxicity Estimation Software Tool

    EPA Science Inventory

    The Computational Toxicology Team within the National Risk Management Research Laboratory has developed a software tool that will allow the user to estimate the toxicity for a variety of endpoints (such as acute aquatic toxicity). The software tool is coded in Java and can be ac...

  16. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors adopting emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed object operating system, the lack of a standard Computer-Aided Software Environment (CASE) tool notation, and the lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.

  17. Comment on "High resolution coherence analysis between planetary and climate oscillations"

    NASA Astrophysics Data System (ADS)

    Holm, Sverre

    2018-07-01

    The paper by Scafetta entitled "High resolution coherence analysis between planetary and climate oscillations", May 2016 claims coherence between planetary movements and the global temperature anomaly. The claim is based on data analysis using the canonical covariance analysis (CCA) estimator for the magnitude squared coherence (MSC). It assumes a model with a predetermined number of sinusoids for the climate data. The results are highly dependent on this prior assumption, and may therefore be criticized for being based on the opposite of a null hypothesis. More importantly, since values of key parameters in the CCA method are not given, some experiments have been performed using the software of the original authors of the CCA estimator. The purpose was to replicate the results of Scafetta using what was perceived to be the most probable parameter values. Despite best efforts, this was not possible.
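
    The comment turns on how magnitude squared coherence (MSC) is estimated. The sketch below computes MSC with SciPy's standard Welch-based estimator on synthetic series that share a 20-year oscillation; it is not the CCA estimator at issue in the paper, and all signal parameters are invented.

```python
# Minimal magnitude-squared-coherence sketch using the standard Welch-based
# estimator in SciPy. This is NOT the canonical covariance analysis (CCA)
# estimator discussed in the comment; it only illustrates what an MSC curve is.
import numpy as np
from scipy import signal

fs = 12.0                          # samples per year (monthly data), illustrative
t = np.arange(0, 150, 1 / fs)      # 150 years of synthetic monthly samples
rng = np.random.default_rng(0)

# Two synthetic series sharing a 20-year oscillation plus independent noise.
common = np.sin(2 * np.pi * t / 20.0)
x = common + 0.5 * rng.standard_normal(t.size)
y = 0.8 * common + 0.5 * rng.standard_normal(t.size)

f, cxy = signal.coherence(x, y, fs=fs, nperseg=512)
mask = f > 0                       # skip the zero-frequency bin
peak = np.argmax(cxy[mask])
print(f"Peak coherence {cxy[mask][peak]:.2f} near a period of {1 / f[mask][peak]:.1f} years")
```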

  18. Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...
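
    TEST applies QSAR methodologies. The sketch below is only a generic illustration of the QSAR idea, regressing a toxicity endpoint on molecular descriptors; the descriptor names and all values are invented, and real QSARs rely on curated descriptor sets and validation protocols.

```python
# Minimal sketch of the QSAR idea behind tools like T.E.S.T.: regress a toxicity
# endpoint on molecular descriptors. Everything below is invented for illustration.
import numpy as np

# Rows: chemicals; columns: [logP, molecular weight / 100, H-bond donors]
X = np.array([
    [1.2, 0.78, 1],
    [2.5, 1.10, 0],
    [0.3, 0.46, 2],
    [3.1, 1.65, 0],
    [1.8, 0.92, 1],
], dtype=float)
y = np.array([3.2, 4.1, 2.5, 4.8, 3.6])   # e.g. -log(LC50), illustrative

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_chemical = np.array([1.0, 2.0, 1.30, 0])   # leading 1.0 is the intercept term
print(f"Predicted endpoint: {new_chemical @ coef:.2f}")
```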

  19. The analysis of the statistical and historical information gathered during the development of the Shuttle Orbiter Primary Flight Software

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.; Marchbanks, M. P., Jr.; Quick, M. J.

    1982-01-01

    The results of an effort to thoroughly and objectively analyze the statistical and historical information gathered during the development of the Shuttle Orbiter Primary Flight Software are given. The particular areas of interest include cost of the software, reliability of the software, requirements for the software and how the requirements changed during development of the system. Data related to the current version of the software system produced some interesting results. Suggestions are made for the saving of additional data which will allow additional investigation.

  20. SCA Waveform Development for Space Telemetry

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale J.; Kifle, Multi; Hall, C. Steve; Quinn, Todd M.

    2004-01-01

    The NASA Glenn Research Center is investigating and developing suitable reconfigurable radio architectures for future NASA missions. This effort is examining software-based open-architectures for space based transceivers, as well as common hardware platform architectures. The Joint Tactical Radio System's (JTRS) Software Communications Architecture (SCA) is a candidate for the software approach, but may need modifications or adaptations for use in space. An in-house SCA compliant waveform development focuses on increasing understanding of software defined radio architectures and more specifically the JTRS SCA. Space requirements put a premium on size, mass, and power. This waveform development effort is key to evaluating tradeoffs with the SCA for space applications. Existing NASA telemetry links, as well as Space Exploration Initiative scenarios, are the basis for defining the waveform requirements. Modeling and simulations are being developed to determine signal processing requirements associated with a waveform and a mission-specific computational burden. Implementation of the waveform on a laboratory software defined radio platform is proceeding in an iterative fashion. Parallel top-down and bottom-up design approaches are employed.

  1. Providing an empirical basis for optimizing the verification and testing phases of software development

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1992-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault density components so that the testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents an alternative approach for constructing such models that is intended to fulfill specific software engineering needs (i.e. dealing with partial/incomplete information and creating models that are easy to interpret). Our approach to classification is as follows: (1) to measure the software system to be considered; and (2) to build multivariate stochastic models for prediction. We present experimental results obtained by classifying FORTRAN components developed at the NASA/GSFC into two fault density classes: low and high. Also we evaluate the accuracy of the model and the insights it provides into the software process.
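
    The abstract describes classifying components into low and high fault density classes from measurements of the software. The sketch below is only a generic illustration of that idea using ordinary logistic regression on invented metric values; it is not the paper's multivariate stochastic modelling approach.

```python
# Hedged sketch of the general idea: classify components into low/high fault
# density from static metrics. Metric values and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [cyclomatic complexity, size in SLOC / 100, number of revisions]
X = np.array([
    [ 4, 0.8, 1], [22, 4.5, 7], [ 9, 1.5, 2], [31, 6.0, 9],
    [ 6, 1.1, 1], [18, 3.2, 5], [12, 2.0, 3], [27, 5.1, 8],
], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 1 = high fault density component

model = LogisticRegression().fit(X, y)
candidate = np.array([[20, 3.8, 6]], dtype=float)
print("P(high fault density) =", round(model.predict_proba(candidate)[0, 1], 2))
```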

  2. Management Aspects of Software Maintenance.

    DTIC Science & Technology

    1984-09-01

    educated in the complex nature of software maintenance to be able to properly evaluate and manage the software maintenance effort. In this...maintenance and improvement may be called "software evolution". The software manager must be educated in the complex nature of software maintenance to be...complaint of error or request for modification is also studied in order to determine what action needs to be taken. 2. Define Objective and Approach:

  3. CHIME: A Metadata-Based Distributed Software Development Environment

    DTIC Science & Technology

    2005-01-01

    structures by using typography, graphics, and animation. The Software Immersion in our conceptual model for CHIME can be seen as a form of Software...Even small- to medium-sized development efforts may involve hundreds of artifacts -- design documents, change requests, test cases and results, code...for managing and organizing information from all phases of the software lifecycle. CHIME is designed around an XML-based metadata architecture, in

  4. TEST (Toxicity Estimation Software Tool) Ver 4.1

    EPA Science Inventory

    The Toxicity Estimation Software Tool (T.E.S.T.) has been developed to allow users to easily estimate toxicity and physical properties using a variety of QSAR methodologies. T.E.S.T allows a user to estimate toxicity without requiring any external programs. Users can input a chem...

  5. Development of Modern Performance Assessment Tools and Capabilities for Underground Disposal of Transuranic Waste at WIPP

    NASA Astrophysics Data System (ADS)

    Zeitler, T.; Kirchner, T. B.; Hammond, G. E.; Park, H.

    2014-12-01

    The Waste Isolation Pilot Plant (WIPP) has been developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. Containment of TRU waste at the WIPP is regulated by the U.S. Environmental Protection Agency (EPA). The DOE demonstrates compliance with the containment requirements by means of performance assessment (PA) calculations. WIPP PA calculations estimate the probability and consequence of potential radionuclide releases from the repository to the accessible environment for a regulatory period of 10,000 years after facility closure. The long-term performance of the repository is assessed using a suite of sophisticated computational codes. In a broad modernization effort, the DOE has overseen the transfer of these codes to modern hardware and software platforms. Additionally, there is a current effort to establish new performance assessment capabilities through the further development of the PFLOTRAN software, a state-of-the-art massively parallel subsurface flow and reactive transport code. Improvements to the current computational environment will result in greater detail in the final models due to the parallelization afforded by the modern code. Parallelization will allow for relatively faster calculations, as well as a move from a two-dimensional calculation grid to a three-dimensional grid. The result of the modernization effort will be a state-of-the-art subsurface flow and transport capability that will serve WIPP PA into the future. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S Department of Energy.

  6. Quantitative software models for the estimation of cost, size, and defects

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Bright, L.; Decker, B.; Lum, K.; Mikulski, C.; Powell, J.

    2002-01-01

    The presentation will provide a brief overview of the SQI measurement program as well as describe each of these models and how they are currently being used in supporting JPL project, task and software managers to estimate and plan future software systems and subsystems.

  7. Estimating BrAC from transdermal alcohol concentration data using the BrAC estimator software program.

    PubMed

    Luczak, Susan E; Rosen, I Gary

    2014-08-01

    Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step for making it possible for non-mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.
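
    The software fits person- and device-specific models linking BrAC to TAC. The sketch below is a deliberately oversimplified stand-in for that calibration step: it treats TAC as a first-order lagged response to BrAC and fits a gain and time constant to a synthetic calibration session. The BrAC Estimator itself uses more sophisticated diffusion models; every value here is invented.

```python
# Oversimplified calibration sketch: TAC modeled as a first-order lag of BrAC.
import numpy as np
from scipy.optimize import curve_fit

dt = 5.0 / 60.0                       # 5-minute samples, in hours
t = np.arange(0, 6, dt)               # a 6-hour synthetic calibration session

# Synthetic "true" BrAC curve for the calibration drink (rise, then linear decay).
brac = np.clip(0.06 * np.minimum(t, 1.0) - 0.015 * np.maximum(t - 1.0, 0.0), 0, None)

def simulate_tac(brac_series, gain, tau):
    """Forward-Euler integration of dTAC/dt = (gain*BrAC - TAC)/tau."""
    tac = np.zeros_like(brac_series)
    for k in range(len(brac_series) - 1):
        tac[k + 1] = tac[k] + dt * (gain * brac_series[k] - tac[k]) / tau
    return tac

# Synthetic observed TAC: chosen "true" parameters plus sensor noise.
rng = np.random.default_rng(1)
tac_obs = simulate_tac(brac, gain=0.9, tau=1.2) + 0.002 * rng.standard_normal(t.size)

fit = lambda _, gain, tau: simulate_tac(brac, gain, tau)
(gain_hat, tau_hat), _ = curve_fit(fit, t, tac_obs, p0=[1.0, 1.0])
print(f"Estimated gain={gain_hat:.2f}, time constant={tau_hat:.2f} h")
```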

  8. Rational Design Methodology.

    DTIC Science & Technology

    1978-09-01

    This report describes an effort to specify a software design methodology applicable to the Air Force software environment. Available methodologies...of techniques for proof of correctness, design specification, and performance assessment of static designs. The rational methodology selected is a

  9. AU-FREDI - AUTONOMOUS FREQUENCY DOMAIN IDENTIFICATION

    NASA Technical Reports Server (NTRS)

    Yam, Y.

    1994-01-01

    The Autonomous Frequency Domain Identification program, AU-FREDI, is a system of methods, algorithms and software that was developed for the identification of structural dynamic parameters and system transfer function characterization for control of large space platforms and flexible spacecraft. It was validated in the CALTECH/Jet Propulsion Laboratory's Large Spacecraft Control Laboratory. Due to the unique characteristics of this laboratory environment, and the environment-specific nature of many of the software's routines, AU-FREDI should be considered to be a collection of routines which can be modified and reassembled to suit system identification and control experiments on large flexible structures. The AU-FREDI software was originally designed to command plant excitation and handle subsequent input/output data transfer, and to conduct system identification based on the I/O data. Key features of the AU-FREDI methodology are as follows: 1. AU-FREDI has on-line digital filter design to support on-orbit optimal input design and data composition. 2. Data composition of experimental data in overlapping frequency bands overcomes finite actuator power constraints. 3. Recursive least squares sine-dwell estimation accurately handles digitized sinusoids and low frequency modes. 4. The system also includes automated estimation of model order using a product moment matrix. 5. A sample-data transfer function parametrization supports digital control design. 6. Minimum variance estimation is assured with a curve fitting algorithm with iterative reweighting. 7. Robust root solvers accurately factorize high order polynomials to determine frequency and damping estimates. 8. Output error characterization of model additive uncertainty supports robustness analysis. The research objectives associated with AU-FREDI were particularly useful in focusing the identification methodology for realistic on-orbit testing conditions. Rather than estimating the entire structure, as is typically done in ground structural testing, AU-FREDI identifies only the key transfer function parameters and uncertainty bounds that are necessary for on-line design and tuning of robust controllers. AU-FREDI's system identification algorithms are independent of the JPL-LSCL environment, and can easily be extracted and modified for use with input/output data files. The basic approach of AU-FREDI's system identification algorithms is to non-parametrically identify the sampled data in the frequency domain using either stochastic or sine-dwell input, and then to obtain a parametric model of the transfer function by curve-fitting techniques. A cross-spectral analysis of the output error is used to determine the additive uncertainty in the estimated transfer function. The nominal transfer function estimate and the estimate of the associated additive uncertainty can be used for robust control analysis and design. AU-FREDI's I/O data transfer routines are tailored to the environment of the CALTECH/ JPL-LSCL which included a special operating system to interface with the testbed. Input commands for a particular experiment (wideband, narrowband, or sine-dwell) were computed on-line and then issued to respective actuators by the operating system. The operating system also took measurements through displacement sensors and passed them back to the software for storage and off-line processing. 
In order to make use of AU-FREDI's I/O data transfer routines, a user would need to provide an operating system capable of overseeing such functions between the software and the experimental setup at hand. The program documentation contains information designed to support users in either providing such an operating system or modifying the system identification algorithms for use with input/output data files. It provides a history of the theoretical, algorithmic and software development efforts including operating system requirements and listings of some of the various special purpose subroutines which were developed and optimized for Lahey FORTRAN compilers on IBM PC-AT computers before the subroutines were integrated into the system software. Potential purchasers are encouraged to purchase and review the documentation before purchasing the AU-FREDI software. AU-FREDI is distributed in DEC VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard media) or a TK50 tape cartridge. AU-FREDI was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
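
    AU-FREDI first identifies the system non-parametrically in the frequency domain and then curve-fits a transfer function. The sketch below illustrates only that generic first step, an FRF estimate H(f) = Pxy(f)/Pxx(f) from input/output records of a synthetic lightly damped mode; it is not AU-FREDI's sine-dwell estimation, curve fitting, or uncertainty machinery, and all signal values are invented.

```python
# Generic non-parametric frequency-response estimate from I/O data.
import numpy as np
from scipy import signal

fs = 100.0
t = np.arange(0, 200, 1 / fs)
rng = np.random.default_rng(2)

u = rng.standard_normal(t.size)                     # wideband excitation
# Synthetic lightly damped mode near 2 Hz standing in for the "structure".
wn, zeta = 2 * np.pi * 2.0, 0.02
sys = signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
_, y, _ = signal.lsim(sys, u, t)
y += 0.05 * rng.standard_normal(t.size)             # sensor noise

f, Pxy = signal.csd(u, y, fs=fs, nperseg=4096)
_, Pxx = signal.welch(u, fs=fs, nperseg=4096)
H = Pxy / Pxx                                        # non-parametric FRF estimate

peak = np.argmax(np.abs(H))
print(f"Identified mode near {f[peak]:.2f} Hz")
```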

  10. Interoperability of Neuroscience Modeling Software

    PubMed Central

    Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik

    2009-01-01

    Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374

  11. Software handlers for process interfaces

    NASA Technical Reports Server (NTRS)

    Bercaw, R. W.

    1976-01-01

    Process interfaces are developed in an effort to reduce the time, effort, and money required to install computer systems. Probably the chief obstacle to the achievement of these goals lies in the problem of developing software handlers having the same degree of generality and modularity as the hardware. The problem of combining the advantages of modular instrumentation with those of modern multitask operating systems has not been completely solved, but there are a number of promising developments. The essential principles involved are considered.

  12. Java PathFinder: A Translator From Java to Promela

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    1999-01-01

    JAVA PATHFINDER, JPF, is a prototype translator from JAVA to PROMELA, the modeling language of the SPIN model checker. JPF is a product of a major effort by the Automated Software Engineering group at NASA Ames to make model checking technology part of the software process. Experience has shown that severe bugs can be found in final code using this technique, and that automated translation from a programming language to a modeling language like PROMELA can help reduce the effort required.

  13. Application of Real Options Theory to Software Engineering for Strategic Decision Making in Software Related Capital Investments

    DTIC Science & Technology

    2008-12-01

    between our current project and the historical projects. Therefore to refine the historical volatility estimate of the previously completed software... historical volatility estimates obtained in the form of beliefs and plausibility based on subjective probabilities that take into consideration unique

  14. Estimation of toxicity using a Java based software tool

    EPA Science Inventory

    A software tool has been developed that will allow a user to estimate the toxicity for a variety of endpoints (such as acute aquatic toxicity). The software tool is coded in Java and can be accessed using a web browser (or alternatively downloaded and run as a stand-alone applic...

  15. Computing volume potentials for noninvasive imaging of cardiac excitation.

    PubMed

    van der Graaf, A W Maurits; Bhagirath, Pranav; van Driel, Vincent J H M; Ramanna, Hemanth; de Hooge, Jacques; de Groot, Natasja M S; Götte, Marco J W

    2015-03-01

    In noninvasive imaging of cardiac excitation, the use of body surface potentials (BSP) rather than body volume potentials (BVP) has been favored due to enhanced computational efficiency and reduced modeling effort. Nowadays, increased computational power and the availability of open source software enable the calculation of BVP for clinical purposes. In order to illustrate the possible advantages of this approach, the explanatory power of BVP is investigated using a rectangular tank filled with an electrolytic conductor and a patient specific three dimensional model. MRI images of the tank and of a patient were obtained in three orthogonal directions using a turbo spin echo MRI sequence. MRI images were segmented in three dimensions using custom-written software. Gmsh software was used for mesh generation. BVP were computed using a transfer matrix and FEniCS software. The solution for 240,000 nodes, corresponding to a resolution of 5 mm throughout the thorax volume, was computed in 3 minutes. The tank experiment revealed that an increased electrode surface renders the position of the 4 V equipotential plane insensitive to mesh cell size and reduces simulated deviations. In the patient-specific model, the impact of assigning a different conductivity to lung tissue on the distribution of volume potentials could be visualized. Generation of high quality volume meshes and computation of BVP with a resolution of 5 mm is feasible using generally available software and hardware. Estimation of BVP may lead to an improved understanding of the genesis of BSP and sources of local inaccuracies. © 2014 Wiley Periodicals, Inc.

  16. Development and Demonstration of Material Properties Database and Software for the Simulation of Flow Properties in Cementitious Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, F.; Flach, G.

    This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to develop a new Cementitious Barriers Project (CBP) software module designated as FLOExcel. FLOExcel incorporates a uniform database to capture material characterization data and a GoldSim model to define flow properties for both intact and fractured cementitious materials and estimate Darcy velocity based on specified hydraulic head gradient and matric tension. The software module includes hydraulic parameters for intact cementitious and granular materials in the database and a standalone GoldSim framework to manipulate the data. The database will be updated with new data as it becomes available. The software module will later be integrated into the next release of the CBP Toolbox, Version 3.0. This report documents the development efforts for this software module. The FY14 activities described in this report focused on the following two items that form the FLOExcel package: 1) Development of a uniform database to capture CBP data for cementitious materials. In particular, the inclusion and use of hydraulic properties of the materials are emphasized; and 2) Development of algorithms and a GoldSim User Interface to calculate hydraulic flow properties of degraded and fractured cementitious materials. Hydraulic properties are required in a simulation of flow through cementitious materials such as Saltstone, waste tank fill grout, and concrete barriers. At SRNL these simulations have been performed using the PORFLOW code as part of Performance Assessments for salt waste disposal and waste tank closure.

  17. Software Toolbox Development for Rapid Earthquake Source Optimisation Combining InSAR Data and Seismic Waveforms

    NASA Astrophysics Data System (ADS)

    Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.

    2017-04-01

    We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and modelling of tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms for an improved estimation of the ruptures' parameters. Through this approach we can provide finite models of earthquake ruptures and therefore contribute to a timely and better understanding of earthquake kinematics. The new kite module enables a fast processing of unwrapped InSAR scenes for source modelling: the spatial sub-sampling and data error/noise estimation for the interferogram is evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows for fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, where we present the earthquake mechanism and rupture model from which we derive regions of increased Coulomb stress. The open source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent, and can be utilized flexibly for a variety of geophysical problems. This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.

  18. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
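
    The handbook's second section concerns software reliability estimation and prediction. As one hedged illustration of that class of analysis (not necessarily the specific models JSC used), the sketch below fits a Goel-Okumoto NHPP mean-value function to invented cumulative failure counts.

```python
# Fit a Goel-Okumoto mean-value function mu(t) = a*(1 - exp(-b*t)) to
# cumulative failure counts; the weekly counts below are invented.
import numpy as np
from scipy.optimize import curve_fit

weeks = np.arange(1, 13)
cum_failures = np.array([5, 11, 16, 20, 24, 27, 29, 31, 32, 33, 34, 34], dtype=float)

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=[40.0, 0.1])

remaining = a_hat - cum_failures[-1]
print(f"Estimated total faults: {a_hat:.1f}, expected remaining: {remaining:.1f}")
```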

  19. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.

  20. Some Methods of Applied Numerical Analysis to 3d Facial Reconstruction Software

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Ianeş, Emilia; Roşu, Doina

    2010-09-01

    This paper deals with the collective work performed by medical doctors from the University of Medicine and Pharmacy Timisoara and engineers from the Politechnical Institute Timisoara in the effort to create the first Romanian 3d reconstruction software based on CT or MRI scans and to test the created software in clinical practice.

  1. Parallelization of Rocket Engine Simulator Software (PRESS)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1997-01-01

    The Parallelization of Rocket Engine System Software (PRESS) project is part of a collaborative effort with Southern University at Baton Rouge (SUBR), University of West Florida (UWF), and Jackson State University (JSU). The second-year funding, which supports two graduate students enrolled in our new Master's program in Computer Science at Hampton University and the principal investigator, has been obtained for the period from October 19, 1996 through October 18, 1997. The key part of the interim report was new directions for the second year funding. This came about from discussions during the Rocket Engine Numeric Simulator (RENS) project meeting in Pensacola on January 17-18, 1997. At that time, a software agreement between Hampton University and NASA Lewis Research Center had already been concluded. That agreement concerns off-NASA-site experimentation with PUMPDES/TURBDES software. Before this agreement, during the first year of the project, another large-scale FORTRAN-based software, Two-Dimensional Kinetics (TDK), was being used for translation to an object-oriented language and parallelization experiments. However, that package proved to be too complex and lacking sufficient documentation for effective translation effort to the object-oriented C++ source code. The focus, this time with the better documented and more manageable PUMPDES/TURBDES package, was still on translation to C++ with design improvements. At the RENS Meeting, however, the new impetus for the RENS projects in general, and PRESS in particular, has shifted in two important ways. One was closer alignment with the work on Numerical Propulsion System Simulator (NPSS) through cooperation and collaboration with the LERC ACLU organization. The other was to see whether and how NASA's various rocket design software can be run over local networks and intranets without any radical efforts for redesign and translation into object-oriented source code. There were also suggestions that the Fortran based code be encapsulated in C++ code thereby facilitating reuse without undue development effort. The details are covered in the aforementioned section of the interim report filed on April 28, 1997.

  2. Error Analysis System for Spacecraft Navigation Using the Global Positioning System (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, S. H.; Hart, R. C.; Hartman, K. R.; Tomcsik, T. L.; Searl, J. E.; Bernstein, A.

    1997-01-01

    The Flight Dynamics Division (FDD) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) is currently developing improved space-navigation filtering algorithms to use the Global Positioning System (GPS) for autonomous real-time onboard orbit determination. In connection with a GPS technology demonstration on the Small Satellite Technology Initiative (SSTI)/Lewis spacecraft, FDD analysts and programmers have teamed with the GSFC Guidance, Navigation, and Control Branch to develop the GPS Enhanced Orbit Determination Experiment (GEODE) system. The GEODE system consists of a Kalman filter operating as a navigation tool for estimating the position, velocity, and additional states required to accurately navigate the orbiting Lewis spacecraft by using astrodynamic modeling and GPS measurements from the receiver. A parallel effort at the FDD is the development of a GPS Error Analysis System (GEAS) that will be used to analyze and improve navigation filtering algorithms during development phases and during in-flight calibration. For GEAS, the Kalman filter theory is extended to estimate the errors in position, velocity, and other error states of interest. The estimation of errors in physical variables at regular intervals will allow the time, cause, and effect of navigation system weaknesses to be identified. In addition, by modeling a sufficient set of navigation system errors, a system failure that causes an observed error anomaly can be traced and accounted for. The GEAS software is formulated using Object Oriented Design (OOD) techniques implemented in the C++ programming language on a Sun SPARC workstation. The Phase 1 of this effort is the development of a basic system to be used to evaluate navigation algorithms implemented in the GEODE system. This paper presents the GEAS mathematical methodology, systems and operations concepts, and software design and implementation. Results from the use of the basic system to evaluate navigation algorithms implemented on GEODE are also discussed. In addition, recommendations for generalization of GEAS functions and for new techniques to optimize the accuracy and control of the GPS autonomous onboard navigation are presented.
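
    The GEODE/GEAS description centers on Kalman filtering for orbit and error-state estimation. As a hedged illustration of the underlying machinery only (not the flight filter's state model), the sketch below runs a textbook one-dimensional position/velocity Kalman predict/update loop; all noise values and measurements are invented.

```python
# Textbook linear Kalman filter predict/update for a 1-D position/velocity state.
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])        # state transition (constant velocity)
H = np.array([[1, 0]])                 # we measure position only
Q = 1e-3 * np.eye(2)                   # process noise covariance
R = np.array([[4.0]])                  # measurement noise covariance

x = np.array([[0.0], [1.0]])           # initial state estimate [pos, vel]
P = np.eye(2)                          # initial state covariance

for z in [1.1, 2.3, 2.9, 4.2, 5.1]:    # illustrative position-like measurements
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("Final estimate: pos=%.2f, vel=%.2f" % (x[0, 0], x[1, 0]))
```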

  3. Soil Carbon Data: long tail recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-07-25

    The software is intended to be part of an open source effort regarding soils data. The software provides customized data ingestion scripts for soil carbon related data sets and scripts for output databases that conform to common templates.

  4. Failure-Modes-And-Effects Analysis Of Software Logic

    NASA Technical Reports Server (NTRS)

    Garcia, Danny; Hartline, Thomas; Minor, Terry; Statum, David; Vice, David

    1996-01-01

    Rigorous analysis applied early in design effort. Method of identifying potential inadequacies and modes and effects of failures caused by inadequacies (failure-modes-and-effects analysis or "FMEA" for short) devised for application to software logic.

  5. Comparison of Aircraft Icing Growth Assessment Software

    NASA Technical Reports Server (NTRS)

    Wright, William; Potapczuk, Mark G.; Levinson, Laurie H.

    2011-01-01

    A research project is underway to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. An extensive comparison of the results in a quantifiable manner against the database of ice shapes that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has been performed, including additional data taken to extend the database in the Super-cooled Large Drop (SLD) regime. The project shows the differences in ice shape between LEWICE 3.2.2, GlennICE, and experimental data. The project addresses the validation of the software against a recent set of ice-shape data in the SLD regime. This validation effort mirrors a similar effort undertaken for previous validations of LEWICE. Those reports quantified the ice accretion prediction capabilities of the LEWICE software. Several ice geometry features were proposed for comparing ice shapes in a quantitative manner. The resulting analysis showed that LEWICE compared well to the available experimental data.

  6. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.

  7. Achieving Better Buying Power for Mobile Open Architecture Software Systems Through Diverse Acquisition Scenarios

    DTIC Science & Technology

    2016-04-30

    software (OSS) and proprietary (CSS) software elements or remote services (Scacchi, 2002, 2010), eventually including recent efforts to support Web...specific platforms, including those operating on secured Web/mobile devices. Common Development Technology provides AC development tools and common...transition to OA systems and OSS software elements, specifically for Web and Mobile devices within the realm of C3CB. OA, Open APIs, OSS, and CSS OA

  8. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  9. An open source framework for tracking and state estimation ('Stone Soup')

    NASA Astrophysics Data System (ADS)

    Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger

    2017-05-01

    The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulties recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of Tracking and State Estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces and access to challenging data sets. Keywords: Tracking,

  10. Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger

    QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.

  11. An Exploratory Study of Software Cost Estimating at the Electronic Systems Division.

    DTIC Science & Technology

    1976-07-01

    action to improve the software cost estimating process. While this research was limited to the ESD environment, the same types of problems may exist...Methods in Social Science. New York: Random House, 1969. 57. Smith, Ronald L. Structured Programming Series (Vol. XI) - Estimating Software Project

  12. A taxonomy and discussion of software attack technologies

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.

    2005-03-01

    Software is a complex thing. It is not an engineering artifact that springs forth from a design by simply following software coding rules; creativity and the human element are at the heart of the process. Software development is part science, part art, and part craft. Design, architecture, and coding are equally important activities and in each of these activities, errors may be introduced that lead to security vulnerabilities. Therefore, inevitably, errors enter into the code. Some of these errors are discovered during testing; however, some are not. The best way to find security errors, whether they are introduced as part of the architecture development effort or coding effort, is to automate the security testing process to the maximum extent possible and add this class of tools to the tools available, which aids in the compilation process, testing, test analysis, and software distribution. Recent technological advances, improvements in computer-generated forces (CGFs), and results in research in information assurance and software protection indicate that we can build a semi-intelligent software security testing tool. However, before we can undertake the security testing automation effort, we must understand the scope of the required testing, the security failures that need to be uncovered during testing, and the characteristics of the failures. Therefore, we undertook the research reported in the paper, which is the development of a taxonomy and a discussion of software attacks generated from the point of view of the security tester with the goal of using the taxonomy to guide the development of the knowledge base for the automated security testing tool. The representation for attacks and threat cases yielded by this research captures the strategies, tactics, and other considerations that come into play during the planning and execution of attacks upon application software. The paper is organized as follows. Section one contains an introduction to our research and a discussion of the motivation for our work. Section two presents our taxonomy of software attacks and a discussion of the strategies employed and general weaknesses exploited for each attack. Section three contains a summary and suggestions for further research.

  13. Data collection procedures for the Software Engineering Laboratory (SEL) database

    NASA Technical Reports Server (NTRS)

    Heller, Gerard; Valett, Jon; Wild, Mary

    1992-01-01

    This document is a guidebook to collecting software engineering data on software development and maintenance efforts, as practiced in the Software Engineering Laboratory (SEL). It supersedes the document entitled Data Collection Procedures for the Rehosted SEL Database, number SEL-87-008 in the SEL series, which was published in October 1987. It presents procedures to be followed on software development and maintenance projects in the Flight Dynamics Division (FDD) of Goddard Space Flight Center (GSFC) for collecting data in support of SEL software engineering research activities. These procedures include detailed instructions for the completion and submission of SEL data collection forms.

  14. Cost benefits of advanced software: A review of methodology used at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla N.

    1993-01-01

    To assist rational investments in advanced software, a formal, explicit, and multi-perspective cost-benefit analysis methodology is proposed. The methodology can be implemented through a six-stage process which is described and explained. The current practice of cost-benefit analysis at KSC is reviewed in the light of this methodology. The review finds that there is a vicious circle operating. Unsound methods lead to unreliable cost-benefit estimates. Unreliable estimates convince management that cost-benefit studies should not be taken seriously. Then, given external demands for cost-benefit estimates, management encourages software engineers to somehow come up with the numbers for their projects. Lacking the expertise needed to do a proper study, courageous software engineers with vested interests use ad hoc and unsound methods to generate some estimates. In turn, these estimates are unreliable, and the cycle continues. The proposed methodology should help KSC to break out of this cycle.

  15. Software Metrics

    DTIC Science & Technology

    1988-12-01

    The software development scene is often characterized by: schedule and cost estimates that are grossly inaccurate, SEI...c. SPQR Model - Jones: T. Capers Jones has developed a software cost estimation model called the Software Productivity, Quality, and Reliability (SPQR) model. The basic approach is similar to that of Boehm's...time (in seconds) is simply derived from E by dividing by the Stroud number, S: T = E/S...d. COPMO - Thebaut...The value
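
    As a small illustration of the conversion quoted in this excerpt, the sketch below applies T = E/S directly. It is a hypothetical example with invented numbers, not taken from the cited report; the Stroud number is commonly quoted as roughly 18 elementary mental discriminations per second.

      # Minimal sketch of the effort-to-time conversion mentioned above (T = E / S).
      # The numbers are illustrative only; the Stroud number S is an assumed value.

      def time_from_effort(effort: float, stroud_number: float = 18.0) -> float:
          """Return estimated programming time T (seconds) from effort E."""
          return effort / stroud_number

      if __name__ == "__main__":
          E = 250_000.0                      # hypothetical effort value
          T = time_from_effort(E)
          print(f"E = {E:.0f}  ->  T = {T:.0f} s  (~{T / 3600:.1f} h)")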

  16. Evidence of absence (v2.0) software user guide

    USGS Publications Warehouse

    Dalthorp, Daniel; Huso, Manuela; Dail, David

    2017-07-06

    Evidence of Absence software (EoA) is a user-friendly software application for estimating bird and bat fatalities at wind farms and for designing search protocols. The software is particularly useful in addressing whether the number of fatalities is below a given threshold and what search parameters are needed to give assurance that thresholds were not exceeded. The software also includes tools (1) for estimating carcass persistence distributions and searcher efficiency parameters from field trials, (2) for projecting future mortality based on past monitoring data, and (3) for exploring the potential consequences of various choices in the design of long-term incidental take permits for protected species. The software was designed specifically for cases where tolerance for mortality is low and carcass counts are small or even 0, but the tools also may be used for mortality estimates when carcass counts are large.
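
    The abstract's central question (whether total mortality is below a threshold when few or no carcasses are found) can be illustrated with a minimal sketch. Assuming a single overall detection probability g (EoA actually builds this from carcass persistence and searcher efficiency) and an observed count x, a simple binomial argument gives the largest total mortality M that is still consistent with the data at a chosen level. The function and numbers below are hypothetical illustrations, not the EoA implementation.

      # Minimal sketch (not the EoA algorithm): given an observed carcass count x and an
      # assumed overall detection probability g, find the largest true mortality M for
      # which observing x or fewer carcasses still has probability >= alpha.

      from scipy.stats import binom

      def plausible_upper_bound(x_observed: int, g: float, alpha: float = 0.05,
                                m_max: int = 1000) -> int:
          """Largest M with P(Binomial(M, g) <= x_observed) >= alpha."""
          upper = 0
          for m in range(x_observed, m_max + 1):
              if binom.cdf(x_observed, m, g) >= alpha:
                  upper = m
          return upper

      if __name__ == "__main__":
          # Hypothetical numbers: no carcasses found, 30% chance of detecting any one carcass.
          print(plausible_upper_bound(x_observed=0, g=0.30))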

  17. Changes and challenges in the Software Engineering Laboratory

    NASA Technical Reports Server (NTRS)

    Pajerski, Rose

    1994-01-01

    Since 1976, the Software Engineering Laboratory (SEL) has been dedicated to understanding and improving the way in which one NASA organization, the Flight Dynamics Division (FDD), develops, maintains, and manages complex flight dynamics systems. The SEL is composed of three member organizations: NASA/GSFC, the University of Maryland, and Computer Sciences Corporation. During the past 18 years, the SEL's overall goal has remained the same: to improve the FDD's software products and processes in a measured manner. This requires that each development and maintenance effort be viewed, in part, as a SEL experiment which examines a specific technology or builds a model of interest for use on subsequent efforts. The SEL has undertaken many technology studies while developing operational support systems for numerous NASA spacecraft missions.

  18. SiGN: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan; Piro, Markus H.A.

    Thermochimica is a software library that determines a unique combination of phases and their compositions at thermochemical equilibrium. Thermochimica can be used for stand-alone calculations or it can be directly coupled to other codes. This release of the software does not have a graphical user interface (GUI) and it can be executed from the command line or from an Application Programming Interface (API). Also, it is not intended for thermodynamic model development or for constructing phase diagrams. The main purpose of the software is to be directly coupled with a multi-physics code to provide material properties and boundary conditions for various physical phenomena. Significant research efforts have been dedicated to enhance computational performance through advanced algorithm development, such as improved estimation techniques and non-linear solvers. Various useful parameters can be provided as output from Thermochimica, such as: determination of which phases are stable at equilibrium, the mass of solution species and phases at equilibrium, mole fractions of solution phase constituents, thermochemical activities (which are related to partial pressures for gaseous species), chemical potentials of solution species and phases, and integral Gibbs energy (referenced relative to standard state). The overall goal is to provide an open source computational tool to enhance the predictive capability of multi-physics codes without significantly impeding computational performance.

  20. Advanced Concept Modeling

    NASA Technical Reports Server (NTRS)

    Chaput, Armand; Johns, Zachary; Hodges, Todd; Selfridge, Justin; Bevirt, Joeben; Ahuja, Vivek

    2015-01-01

    Advanced Concepts Modeling software validation, analysis, and design. This was a National Institute of Aerospace contract comprising many distinct efforts, ranging from software development and validation for structures and aerodynamics, through flight control development and aeropropulsive analysis, to UAV piloting services.

  1. Computerizing the Accounting Curriculum.

    ERIC Educational Resources Information Center

    Nash, John F.; England, Thomas G.

    1986-01-01

    Discusses the use of computers in college accounting courses. Argues that the success of new efforts in using computers in teaching accounting is dependent upon increasing instructors' computer skills, and choosing appropriate hardware and software, including commercially available business software packages. (TW)

  2. Globus Quick Start Guide. Globus Software Version 1.1

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The Globus Project is a community effort, led by Argonne National Laboratory and the University of Southern California's Information Sciences Institute. Globus is developing the basic software infrastructure for computations that integrate geographically distributed computational and information resources.

  3. The 2005 Workbook: an improved tool for estimating HIV prevalence in countries with low level and concentrated epidemics.

    PubMed

    Lyerla, R; Gouws, E; García-Calleja, J M; Zaniewski, E

    2006-06-01

    This paper describes improvements and updates to an established approach to making epidemiological estimates of HIV prevalence in countries with low level and concentrated epidemics. The structure of the software used to make estimates is briefly described, with particular attention to changes and improvements. The approach focuses on identifying populations which, through their behaviour, are at high risk of infection with HIV or who are exposed through the risk behaviour of their sexual partners. Estimates of size and HIV prevalence of these populations allow the total number of HIV infected people in a country or region to be estimated. Major changes in the software focus on the move away from short term projections and towards developing an epidemiological curve that more accurately represents the change in prevalence of HIV over time. The software continues to provide an output file for use in the Spectrum software so as to estimate the demographic impact of HIV infection at country level.
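
    The core bookkeeping described in the abstract (estimated size of each at-risk population multiplied by its estimated HIV prevalence, summed to a national total) can be sketched in a few lines. The group names and figures below are invented for illustration and are not Workbook data.

      # Minimal sketch of the Workbook-style aggregation: estimated infections per
      # at-risk group = (estimated group size) x (estimated HIV prevalence in that group).
      # All population figures and prevalences below are hypothetical.

      risk_groups = {
          # group name: (estimated population size, estimated HIV prevalence)
          "people who inject drugs": (25_000, 0.12),
          "sex workers":             (40_000, 0.05),
          "clients of sex workers":  (400_000, 0.008),
          "low-risk general adults": (5_000_000, 0.0005),
      }

      infections_by_group = {name: size * prev for name, (size, prev) in risk_groups.items()}
      total_infections = sum(infections_by_group.values())

      for name, n in infections_by_group.items():
          print(f"{name:>25s}: {n:,.0f}")
      print(f"{'estimated total':>25s}: {total_infections:,.0f}")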

  4. Classification software technique assessment

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Atkinson, R.; Dasarathy, B. V.; Lybanon, M.; Ramapryian, H. K.

    1976-01-01

    A catalog of software options is presented for the use of local user communities to obtain software for analyzing remotely sensed multispectral imagery. The resources required to utilize a particular software program are described. Descriptions of how a particular program analyzes data and the performance of that program for an application and data set provided by the user are shown. An effort is made to establish a statistical performance base for various software programs with regard to different data sets and analysis applications, to determine the status of the state-of-the-art.

  5. Gaining Control and Predictability of Software-Intensive Systems Development and Sustainment

    DTIC Science & Technology

    2015-02-04

    implementation of the baselines, audits, and technical reviews within an overarching systems engineering process (SEP; Defense Acquisition University...warfighters’ needs. This management and metrics effort supplements and supports the system’s technical development through the baselines, audits and...other areas that could be researched and added into the nine-tier model. Areas including software metrics, quality assurance, software-oriented

  6. Open Source Software in Teaching Physics: A Case Study on Vector Algebra and Visual Representations

    ERIC Educational Resources Information Center

    Cataloglu, Erdat

    2006-01-01

    This study aims to report the effort on teaching vector algebra using free open source software (FOSS). Recent studies showed that students have difficulties in learning basic physics concepts. Constructivist learning theories suggest the use of visual and hands-on activities in learning. We will report on the software used for this purpose. The…

  7. Using Digital Acoustic Recording Tags to Detect Marine Mammals on Navy Ranges and Study their Responses to Naval Sonar

    DTIC Science & Technology

    2011-02-01

    written in C and assembly languages. 2) executable code for the low-power wakeup controller in the tag. This software is responsible for the VHF...used in the tag software. The multi-rate processing in the new tag necessitated a more complex task-scheduling software architecture. The effort of

  8. Utilization of Intelligent Software Agent Features for Improving E-Learning Efforts: A Comprehensive Investigation

    ERIC Educational Resources Information Center

    Farzaneh, Mandana; Vanani, Iman Raeesi; Sohrabi, Babak

    2012-01-01

    E-learning is one of the most important learning approaches within which intelligent software agents can be efficiently used so as to automate and facilitate the process of learning. The aim of this paper is to illustrate a comprehensive categorization of intelligent software agent features, which is valuable for being deployed in the virtual…

  9. An Analysis of Botnet Vulnerabilities

    DTIC Science & Technology

    2007-06-01

    Definition Currently, the primary defense against botnets is prompt patching of vulnerable systems and antivirus software. Network monitoring can identify...IRCd software, none were identified during this effort...are software agents designed to automatically perform tasks. Examples include web-spiders that catalog the Internet and bots found in popular online

  10. The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    NASA Technical Reports Server (NTRS)

    Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David

    1990-01-01

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

  11. A Human Reliability Based Usability Evaluation Method for Safety-Critical Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillippe Palanque; Regina Bernhaupt; Ronald Boring

    2006-04-01

    Recent years have seen an increasing use of sophisticated interaction techniques including in the field of safety critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between the users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation done in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community in order to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. However, the non-reliability of interactive software can jeopardize usability evaluation by showing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems to minimize duplicate efforts in both communities.

  12. Improving a data-acquisition software system with abstract data type components

    NASA Technical Reports Server (NTRS)

    Howard, S. D.

    1990-01-01

    Abstract data types and object-oriented design are active research areas in computer science and software engineering. Much of the interest is aimed at new software development. Abstract data type packages developed for a discontinued software project were used to improve a real-time data-acquisition system under maintenance. The result saved effort and contributed to a significant improvement in the performance, maintainability, and reliability of the Goldstone Solar System Radar Data Acquisition System.

  13. A Practical Guide to Calibration of a GSSHA Hydrologic Model Using ERDC Automated Model Calibration Software - Effective and Efficient Stochastic Global Optimization

    DTIC Science & Technology

    2012-02-01

    parameter estimation method, but rather to carefully describe how to use the ERDC software implementation of MLSL that accommodates the PEST model...model independent LM method based parameter estimation software PEST (Doherty, 2004, 2007a, 2007b), which quantifies model to measurement misfit...et al. (2011) focused on one drawback associated with LM-based model independent parameter estimation as implemented in PEST; viz., that it requires

  14. A Decision Support System for Planning, Control and Auditing of DoD Software Cost Estimation.

    DTIC Science & Technology

    1986-03-01

    is frequently used in U.S. Air Force software cost estimates. Barry Boehm's Constructive Cost Model (COCOMO) was recently selected for use...are considered basic to the proper development of software. Pressman [Ref. 11] addresses these basic elements in a manner which attempts to integrate...H., Jr., and Carlson, Eric D., Building Effective Decision Support Systems, Prentice-Hall, Englewood Cliffs, NJ, 1982 11. Pressman, Roger S., A Practitioner's A

  15. Methods for cost estimation in software project management

    NASA Astrophysics Data System (ADS)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed at which software development processes have changed makes forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, but others hold that it can be addressed using well-known mathematical methods (e.g., multiple linear regression) as well as newer techniques such as genetic programming and neural networks. The paper presents a solution for building cost estimation models for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. The first part of the paper summarizes the major achievements in research on estimating overall project costs and describes the existing software development process models. The last part proposes a basic mathematical model for genetic programming, including the chosen fitness function and chromosome representation. The perspective of the model described is linked to the current reality of software development, taking the software product life cycle and the current challenges and innovations in the field as its basis. Based on the authors' experience and an analysis of existing models and product life cycles, it is concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
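
    To make the fitness-function idea concrete, the sketch below evaluates a candidate effort model against a few invented (size, actual effort) pairs using the mean magnitude of relative error (MMRE), a common fitness measure in this line of work. The data points and the candidate model are hypothetical, not taken from the paper or the PROMISE repository.

      # Minimal sketch of the kind of fitness evaluation used when evolving cost models:
      # mean magnitude of relative error (MMRE) of a candidate model over historical projects.
      # The candidate has the classic effort = a * size**b shape; all data are invented.

      from typing import Callable, Sequence, Tuple

      Project = Tuple[float, float]   # (size in KLOC, actual effort in person-months)

      def mmre(model: Callable[[float], float], projects: Sequence[Project]) -> float:
          """Mean magnitude of relative error of `model` over `projects`."""
          errors = [abs(model(size) - actual) / actual for size, actual in projects]
          return sum(errors) / len(errors)

      if __name__ == "__main__":
          history = [(10.0, 24.0), (46.0, 240.0), (115.0, 900.0), (6.0, 14.0)]  # hypothetical
          candidate = lambda kloc: 2.4 * kloc ** 1.05   # COCOMO-81-like organic-mode shape
          print(f"MMRE of candidate model: {mmre(candidate, history):.2f}")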

  16. The inverse Numerical Computer Program FLUX-BOT for estimating Vertical Water Fluxes from Temperature Time-Series.

    NASA Astrophysics Data System (ADS)

    Trauth, N.; Schmidt, C.; Munz, M.

    2016-12-01

    Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and are subsequently evaluated by a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions as well as their implementation into user-friendly software exist in order to estimate water fluxes from the observed temperatures. Analytical solutions can be easily implemented but assumptions on the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event induced temperature changes, that cannot readily be incorporated in analytical solutions. This also reduces the effort of data preprocessing, such as the extraction of the diurnal temperature variation. We developed software to estimate water FLUXes Based On Temperatures - FLUX-BOT. FLUX-BOT is a numerical code written in MATLAB which is intended to calculate vertical water fluxes in saturated sediments, based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance.
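
    To make the numerical core concrete, the sketch below advances a 1D advection-conduction temperature profile by repeated Crank-Nicolson steps on a uniform grid with fixed top and bottom temperatures. It is a minimal illustration of the scheme named in the abstract, not FLUX-BOT itself, and all parameter values are invented.

      # Minimal Crank-Nicolson sketch for the 1D advection-conduction equation
      #     dT/dt = D * d2T/dz2 - v * dT/dz
      # on a uniform grid with fixed (Dirichlet) temperatures at the top and bottom nodes.
      # Illustration only; parameter values are hypothetical.

      import numpy as np

      def crank_nicolson_step(T, dz, dt, D, v):
          """Advance the temperature profile T (1D array) by one time step."""
          n = len(T)
          A = np.zeros((n, n))                       # discrete operator D*d2/dz2 - v*d/dz
          for i in range(1, n - 1):
              A[i, i - 1] = D / dz**2 + v / (2 * dz)
              A[i, i]     = -2 * D / dz**2
              A[i, i + 1] = D / dz**2 - v / (2 * dz)
          I = np.eye(n)
          lhs = I - 0.5 * dt * A
          rhs = (I + 0.5 * dt * A) @ T
          # Keep the boundary rows as identity so the end temperatures stay fixed.
          lhs[0, :], lhs[-1, :] = I[0, :], I[-1, :]
          rhs[0], rhs[-1] = T[0], T[-1]
          return np.linalg.solve(lhs, rhs)

      if __name__ == "__main__":
          z = np.linspace(0.0, 0.5, 51)              # 0.5 m of sediment, 1 cm node spacing
          T = np.full_like(z, 12.0); T[0] = 16.0     # hypothetical initial/boundary temperatures (degC)
          for _ in range(600):                       # 600 steps of 60 s = 10 h
              T = crank_nicolson_step(T, dz=z[1] - z[0], dt=60.0, D=1e-6, v=1e-6)
          print(T[::10].round(2))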

  17. CrossTalk: The Journal of Defense Software Engineering. Volume 20, Number 6, June 2007

    DTIC Science & Technology

    2007-06-01

    California. He has co-authored the book Software Cost Estimation With COCOMO II with Barry Boehm and others. Clark helped define the COCOMO II model...Software Engineering at the University of Southern California. She worked with Barry Boehm and Chris Abts to develop and calibrate a cost-estimation...2003/02/schorsch.html>. 2. See “Software Engineering, A Practitioner's Approach” by Roger Pressman for a good description of coupling, cohesion

  18. Software Engineering Laboratory (SEL) cleanroom process model

    NASA Technical Reports Server (NTRS)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where IC's are assembled in dust free 'clean rooms' to prevent the destructive effects of dust. When applying the clean room methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle from the delivery of requirements to the start of acceptance testing are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  19. A Model Independent S/W Framework for Search-Based Software Testing

    PubMed Central

    Baik, Jongmoon

    2014-01-01

    In Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of a model has changed from one to another, all functions of a search technique must be reimplemented because the types of models are different even if the same search technique has been applied. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce redundant works. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves the productivity by about 50% when changing the type of a model. PMID:25302314
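
    One way to read the 'model-independent' claim is that the search algorithm only ever talks to the model through a small interface (generate a candidate test, perturb it, score it), so switching model types does not require reimplementing the search. The sketch below illustrates that separation with a deliberately tiny hill-climbing search; the interface and the toy model are invented for illustration and are not the framework's actual API.

      # Minimal sketch of a model-independent search: the hill climber depends only on an
      # abstract TestModel interface, so a new model type only needs to implement the
      # interface, not the search algorithm. Interface and example model are hypothetical.

      import random
      from abc import ABC, abstractmethod
      from typing import List

      class TestModel(ABC):
          """What the search needs from any model of the system under test."""
          @abstractmethod
          def random_test(self) -> List[int]: ...
          @abstractmethod
          def neighbour(self, test: List[int]) -> List[int]: ...
          @abstractmethod
          def fitness(self, test: List[int]) -> float: ...   # higher = closer to the test goal

      class ThresholdModel(TestModel):
          """Toy example: find an input vector whose element sum reaches a target value."""
          def __init__(self, length=5, target=40):
              self.length, self.target = length, target
          def random_test(self):
              return [random.randint(0, 10) for _ in range(self.length)]
          def neighbour(self, test):
              t = list(test)
              t[random.randrange(self.length)] = random.randint(0, 10)
              return t
          def fitness(self, test):
              return -abs(self.target - sum(test))

      def hill_climb(model: TestModel, iterations: int = 500) -> List[int]:
          best = model.random_test()
          for _ in range(iterations):
              cand = model.neighbour(best)
              if model.fitness(cand) > model.fitness(best):
                  best = cand
          return best

      if __name__ == "__main__":
          random.seed(1)
          test = hill_climb(ThresholdModel())
          print(test, "fitness:", ThresholdModel().fitness(test))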

  20. Sharing Research Models: Using Software Engineering Practices for Facilitation

    PubMed Central

    Bryant, Stephanie P.; Solano, Eric; Cantor, Susanna; Cooley, Philip C.; Wagener, Diane K.

    2011-01-01

    Increasingly, researchers are turning to computational models to understand the interplay of important variables on systems’ behaviors. Although researchers may develop models that meet the needs of their investigation, application limitations—such as nonintuitive user interface features and data input specifications—may limit the sharing of these tools with other research groups. By removing these barriers, other research groups that perform related work can leverage these work products to expedite their own investigations. The use of software engineering practices can enable managed application production and shared research artifacts among multiple research groups by promoting consistent models, reducing redundant effort, encouraging rigorous peer review, and facilitating research collaborations that are supported by a common toolset. This report discusses three established software engineering practices— the iterative software development process, object-oriented methodology, and Unified Modeling Language—and the applicability of these practices to computational model development. Our efforts to modify the MIDAS TranStat application to make it more user-friendly are presented as an example of how computational models that are based on research and developed using software engineering practices can benefit a broader audience of researchers. PMID:21687780

  1. Regular Topologies for Gigabit Wide-Area Networks: Congestion Avoidance Testbed Experiments. Volume 3

    NASA Technical Reports Server (NTRS)

    Denny, Barbara A.; McKenney, Paul E., Sr.; Lee, Danny

    1994-01-01

    This document is Volume 3 of the final technical report on the work performed by SRI International (SRI) on SRI Project 8600. The document includes source listings for all software developed by SRI under this effort. Since some of our work involved the use of ST-II and the Sun Microsystems, Inc. (Sun) High-Speed Serial Interface (HSI/S) driver, we have included some of the source developed by LBL and BBN as well. In most cases, our decision to include source developed by other contractors depended on whether it was necessary to modify the original code. If we have modified the software in any way, it is included in this document. In the case of the Traffic Generator (TG), however, we have included all the ST-II software, even though BBN performed the integration, because the ST-II software is part of the standard TG release. It is important to note that all the code developed by other contractors is in the public domain, so that all software developed under this effort can be re-created from the source included here.

  2. Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2016-12-01

    Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm with an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
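
    The practical difference between the two norms is easy to demonstrate on a small synthetic example: with one gross outlier in the observations (such as an unresolved ambiguity jump), a least-absolute-deviations (L1) fit stays near the true parameters while the least-squares (L2) fit is pulled away. The sketch below approximates the L1 solution by iteratively reweighted least squares; the data are synthetic and the code is not the c5++ implementation.

      # Minimal sketch: L2 (least squares) vs an IRLS approximation of the L1 fit for a
      # straight line, with one gross outlier in the observations. Synthetic data only.

      import numpy as np

      def l2_fit(A, y):
          return np.linalg.lstsq(A, y, rcond=None)[0]

      def l1_fit(A, y, iters=50, eps=1e-6):
          """Approximate L1-norm solution by iteratively reweighted least squares."""
          x = l2_fit(A, y)
          for _ in range(iters):
              w = 1.0 / np.maximum(np.abs(y - A @ x), eps)   # weights ~ 1/|residual|
              W = np.diag(w)
              x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
          return x

      if __name__ == "__main__":
          t = np.linspace(0.0, 1.0, 20)
          A = np.column_stack([np.ones_like(t), t])          # model: y = a + b*t
          y = 2.0 + 3.0 * t
          y[7] += 40.0                                       # one gross outlier (e.g. an ambiguity jump)
          print("true      :", [2.0, 3.0])
          print("L2 fit    :", np.round(l2_fit(A, y), 2))
          print("L1 (IRLS) :", np.round(l1_fit(A, y), 2))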

  3. The GPM Ground Validation Program: Pre to Post-Launch

    NASA Astrophysics Data System (ADS)

    Petersen, W. A.

    2014-12-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre to post-launch era. Prior to launch direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S. direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch, MRMS products including precipitation rate, types and data quality are being routinely generated to facilitate statistical GV of instantaneous and merged GPM products. To assess precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data are performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined MRMS and VN datasets enable more comprehensive interpretation of ground-satellite estimation uncertainties. To support physical validation efforts eight (one) field campaigns have been conducted in the pre (post) launch era. The campaigns span regimes from northern latitude cold-season snow to warm tropical rain. Most recently the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, process and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi-radar, gauge and disdrometer facility located in coastal Virginia. This presentation will summarize the evolution of the NASA GPM GV program from pre to post-launch eras and highlight early evaluations of GPM satellite datasets.

  4. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
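
    For readers unfamiliar with this model family, the sketch below fits the classic Goel-Okumoto NHPP mean value function m(t) = a(1 - exp(-bt)) to a short, invented cumulative fault-count history by nonlinear least squares. It illustrates the kind of NHPP-based SRM the paper builds on, not the equilibrium-distribution approach it proposes.

      # Minimal sketch of an NHPP-based SRM: fit the Goel-Okumoto mean value function
      #     m(t) = a * (1 - exp(-b * t))
      # to cumulative fault counts by least squares. Data are invented; this shows the
      # model family the paper starts from, not the equilibrium-distribution variant.

      import numpy as np
      from scipy.optimize import curve_fit

      def mean_value(t, a, b):
          return a * (1.0 - np.exp(-b * t))

      if __name__ == "__main__":
          weeks  = np.arange(1, 11, dtype=float)
          faults = np.array([12, 21, 28, 34, 38, 41, 44, 45, 47, 48], dtype=float)  # cumulative, hypothetical
          (a, b), _ = curve_fit(mean_value, weeks, faults, p0=(50.0, 0.3))
          print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.2f}")
          print(f"expected residual faults after week 10: {a - mean_value(10.0, a, b):.1f}")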

  5. Space Station Freedom (SSF) Data Management System (DMS) performance model data base

    NASA Technical Reports Server (NTRS)

    Stovall, John R.

    1993-01-01

    The purpose of this document was originally to be a working document summarizing Space Station Freedom (SSF) Data Management System (DMS) hardware and software design, configuration, performance and estimated loading data from a myriad of source documents such that the parameters provided could be used to build a dynamic performance model of the DMS. The document is published at this time as a close-out of the DMS performance modeling effort resulting from the Clinton Administration mandated Space Station Redesign. The DMS as documented in this report is no longer a part of the redesigned Space Station. The performance modeling effort was a joint undertaking between the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) Flight Data Systems Division (FDSD) and the NASA Ames Research Center (ARC) Spacecraft Data Systems Research Branch. The scope of this document is limited to the DMS core network through the Man Tended Configuration (MTC) as it existed prior to the 1993 Clinton Administration mandated Space Station Redesign. Data is provided for the Standard Data Processors (SDP's), Multiplexer/Demultiplexers (MDM's) and Mass Storage Units (MSU's). Planned future releases would have added the additional hardware and software descriptions needed to describe the complete DMS. Performance and loading data through the Permanent Manned Configuration (PMC) was to have been included as it became available. No future releases of this document are presently planned pending completion of the present Space Station Redesign activities and task reassessment.

  6. Profile of software engineering within the National Aeronautics and Space Administration (NASA)

    NASA Technical Reports Server (NTRS)

    Sinclair, Craig C.; Jeletic, Kellyann F.

    1994-01-01

    This paper presents findings of baselining activities being performed to characterize software practices within the National Aeronautics and Space Administration. It describes how such baseline findings might be used to focus software process improvement activities. Finally, based on the findings to date, it presents specific recommendations in focusing future NASA software process improvement efforts. The findings presented in this paper are based on data gathered and analyzed to date. As such, the quantitative data presented in this paper are preliminary in nature.

  7. Open source molecular modeling.

    PubMed

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  8. ESPC Common Model Architecture Earth System Modeling Framework (ESMF) Software and Application Development

    DTIC Science & Technology

    2015-09-30

    originate from NASA , NOAA , and community modeling efforts, and support for creation of the suite was shared by sponsors from other agencies. ESPS...Framework (ESMF) Software and Application Development Cecelia Deluca NESII/CIRES/ NOAA Earth System Research Laboratory 325 Broadway Boulder, CO...Capability (NUOPC) was established between NOAA and Navy to develop a common software architecture for easy and efficient interoperability. The

  9. A Study of Performance and Effort Expectancy Factors among Generational and Gender Groups to Predict Enterprise Social Software Technology Adoption

    ERIC Educational Resources Information Center

    Patel, Sunil S.

    2013-01-01

    Social software technology has gained considerable popularity over the last decade and has had a great impact on hundreds of millions of people across the globe. Businesses have also expressed their interest in leveraging its use in business contexts. As a result, software vendors and business consumers have invested billions of dollars to use…

  10. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Sewy, D.; Pickering, C.; Sauers, R.

    1984-01-01

    The purpose of phase 2 of the power subsystem automation study was to demonstrate the feasibility of using computer software to manage an aspect of the electrical power subsystem on a space station. The state of the art in expert systems software was investigated in this study. This effort resulted in the demonstration of prototype expert system software for managing one aspect of a simulated space station power subsystem.

  11. Estimation and enhancement of real-time software reliability through mutation analysis

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.

    1992-01-01

    A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.

  12. Future Software Sizing Metrics and Estimation Challenges

    DTIC Science & Technology

    2011-07-01

    systems 4. Ultrahigh software system assurance 5. Legacy maintenance and Brownfield development 6. Agile and Lean/ Kanban development. This paper...refined as the design of the maintenance modifications or Brownfield re-engineering is determined. VII. 6. AGILE AND LEAN/ KANBAN DEVELOPMENT The...difficulties of software maintenance estimation can often be mitigated by using lean workflow management techniques such as Kanban [25]. In Kanban

  13. [Evaluation of Organ Dose Estimation from Indices of CT Dose Using Dose Index Registry].

    PubMed

    Iriuchijima, Akiko; Fukushima, Yasuhiro; Ogura, Akio

    Direct measurement of each patient's organ dose from computed tomography (CT) is not possible. Most methods for estimating patient organ dose use Monte Carlo simulation with dedicated software. However, dedicated software is too expensive for small-scale hospitals, so not every hospital can estimate organ dose this way. The purpose of this study was to evaluate a simple method of organ dose estimation using common indices of CT dose. The Monte Carlo simulation software Radimetrics (Bayer) was used to calculate organ dose and to analyze the relationship between indices of CT dose and organ dose. Multidetector CT scanners from two manufacturers were compared (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare). Using stored patient data from Radimetrics, the relationships between indices of CT dose and organ dose were expressed as formulas for estimating organ dose. The accuracy of the estimation method was compared with the results of Monte Carlo simulation using Bland-Altman plots. In the results, SSDE was a feasible index for estimating organ dose in almost all organs because it reflects each patient's size. The differences in organ dose between estimation and simulation were within 23%. In conclusion, our method of estimating organ dose from indices of CT dose is convenient and sufficiently accurate for clinical use.
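
    The estimation chain evaluated in the study (a CT dose index such as CTDIvol converted to a size-specific dose estimate, which is then mapped to organ dose by a fitted formula) can be sketched as follows. The conversion-factor constants and the organ-dose coefficients below are hypothetical placeholders, not the values derived in the study.

      # Minimal sketch of the estimation chain described above: CTDIvol -> SSDE (via a
      # size-dependent conversion factor) -> organ dose (via a fitted linear relation).
      # The conversion-factor fit and the organ-dose coefficients are hypothetical
      # placeholders, not the study's values.

      import math

      def ssde(ctdi_vol_mgy: float, effective_diameter_cm: float,
               a: float = 3.70, b: float = 0.037) -> float:
          """Size-specific dose estimate: SSDE = f(d) * CTDIvol, with f an exponential fit."""
          f = a * math.exp(-b * effective_diameter_cm)
          return f * ctdi_vol_mgy

      def organ_dose_from_ssde(ssde_mgy: float, slope: float = 1.1, intercept: float = 0.0) -> float:
          """Organ dose estimated from SSDE with study-specific linear coefficients."""
          return slope * ssde_mgy + intercept

      if __name__ == "__main__":
          s = ssde(ctdi_vol_mgy=10.0, effective_diameter_cm=28.0)   # hypothetical abdominal scan
          print(f"SSDE ~ {s:.1f} mGy, estimated organ dose ~ {organ_dose_from_ssde(s):.1f} mGy")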

  14. Autonomous Aerobraking Development Software: Phase 2 Summary

    NASA Technical Reports Server (NTRS)

    Cianciolo, Alicia D.; Maddock, Robert W.; Prince, Jill L.; Bowes, Angela; Powell, Richard W.; White, Joseph P.; Tolson, Robert; O'Shaughnessy, Daniel; Carrelli, David

    2013-01-01

    NASA has used aerobraking at Mars and Venus to reduce the fuel required to deliver a spacecraft into a desired orbit compared to an all-propulsive solution. Although aerobraking reduces the propellant, it does so at the expense of mission duration, large staff, and DSN coverage. These factors make aerobraking a significant cost element in the mission design. By moving on-board the current ground-based tasks of ephemeris determination, atmospheric density estimation, and maneuver sizing and execution, a flight project would realize significant cost savings. The NASA Engineering and Safety Center (NESC) sponsored Phase 1 and 2 of the Autonomous Aerobraking Development Software (AADS) study, which demonstrated the initial feasibility of moving these current ground-based functions to the spacecraft. This paper highlights key state-of-the-art advancements made in the Phase 2 effort to verify that the AADS algorithms are accurate, robust and ready to be considered for application on future missions that utilize aerobraking. The advancements discussed herein include both model updates and simulation and benchmark testing. Rigorous testing using observed flight atmospheres, operational environments and statistical analysis characterized the AADS operability in a perturbed environment.

  15. Extracting data from figures with software was faster, with higher interrater reliability than manual extraction.

    PubMed

    Jelicic Kadic, Antonia; Vucic, Katarina; Dosenovic, Svjetlana; Sapunar, Damir; Puljak, Livia

    2016-06-01

    To compare speed and accuracy of graphical data extraction using manual estimation and open source software. Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with Plot Digitizer, an open source software tool. Corresponding authors of each RCT were contacted up to four times via e-mail to obtain the exact numbers that were used to create the graphs. Accuracy of each method was compared against the source data from which the original graphs were produced. Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively. Data extraction from figures should be conducted using software, whereas manual estimation should be avoided. Using software for extraction of data presented only in figures is faster and enables higher interrater reliability. Copyright © 2016 Elsevier Inc. All rights reserved.
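
    The core of any graph-digitizing workflow, manual or software-assisted, is a calibration that maps pixel coordinates to data coordinates from two known reference points per axis. The sketch below shows that linear mapping; it is a generic illustration, not Plot Digitizer's code.

      # Minimal sketch of the pixel-to-data calibration behind graph digitizing: two known
      # reference points per axis define a linear map from pixel coordinates to data values.
      # Generic illustration only; coordinates below are hypothetical.

      def make_axis_map(p0, d0, p1, d1):
          """Linear map from pixel coordinate to data value, from two reference points."""
          scale = (d1 - d0) / (p1 - p0)
          return lambda p: d0 + (p - p0) * scale

      if __name__ == "__main__":
          # Hypothetical calibration clicks: x-axis pixels 100->0.0 and 700->10.0,
          # y-axis pixels 500->0.0 and 100->50.0 (pixel y grows downwards).
          x_of = make_axis_map(100, 0.0, 700, 10.0)
          y_of = make_axis_map(500, 0.0, 100, 50.0)
          clicked_points = [(160, 460), (400, 300), (640, 140)]   # hypothetical data markers
          for px, py in clicked_points:
              print(f"pixel ({px}, {py}) -> data ({x_of(px):.2f}, {y_of(py):.2f})")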

  16. Automating Structural Analysis of Spacecraft Vehicles

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.

    2004-01-01

    A major effort within NASA's vehicle analysis discipline has been to automate structural analysis and sizing optimization during conceptual design studies of advanced spacecraft. Traditional spacecraft structural sizing has involved detailed finite element analysis (FEA) requiring large degree-of-freedom (DOF) finite element models (FEM). Creation and analysis of these models can be time consuming and limit model size during conceptual designs. The goal is to find an optimal design that meets the mission requirements but produces the lightest structure. A structural sizing tool called HyperSizer has been successfully used in the conceptual design phase of a reusable launch vehicle and planetary exploration spacecraft. The program couples with FEA to enable system level performance assessments and weight predictions including design optimization of material selections and sizing of spacecraft members. The software's analysis capabilities are based on established aerospace structural methods for strength, stability and stiffness that produce adequately sized members and reliable structural weight estimates. The software also helps to identify potential structural deficiencies early in the conceptual design so changes can be made without wasted time. HyperSizer's automated analysis and sizing optimization increases productivity and brings standardization to a systems study. These benefits will be illustrated in examining two different types of conceptual spacecraft designed using the software. A hypersonic air breathing, single stage to orbit (SSTO), reusable launch vehicle (RLV) will be highlighted as well as an aeroshell for a planetary exploration vehicle used for aerocapture at Mars. By showing the two different types of vehicles, the software's flexibility will be demonstrated with an emphasis on reducing aeroshell structural weight. Member sizes, concepts and material selections will be discussed as well as analysis methods used in optimizing the structure. Analysis based on the HyperSizer structural sizing software will be discussed. Design trades required to optimize structural weight will be presented.

  17. Empirical Estimates of 0Day Vulnerabilities in Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miles A. McQueen; Wayne F. Boyer; Sean M. McBride

    2009-01-01

    We define a 0Day vulnerability to be any vulnerability, in deployed software, which has been discovered by at least one person but has not yet been publicly announced or patched. These 0Day vulnerabilities are of particular interest when assessing the risk to well managed control systems which have already effectively mitigated the publicly known vulnerabilities. In these well managed systems the risk contribution from 0Days will have proportionally increased. To aid understanding of how great a risk 0Days may pose to control systems, an estimate of how many are in existence is needed. Consequently, using the 0Day definition given above, we developed and applied a method for estimating how many 0Day vulnerabilities are in existence on any given day. The estimate is made by: empirically characterizing the distribution of the lifespans, measured in days, of 0Day vulnerabilities; determining the number of vulnerabilities publicly announced each day; and applying a novel method for estimating the number of 0Day vulnerabilities in existence on any given day using the number of vulnerabilities publicly announced each day and the previously derived distribution of 0Day lifespans. The method was first applied to a general set of software applications by analyzing the 0Day lifespans of 491 software vulnerabilities and using the daily rate of vulnerability announcements in the National Vulnerability Database. This led to a conservative estimate that in the worst year there were, on average, 2500 0Day software related vulnerabilities in existence on any given day. Using a smaller but intriguing set of 15 0Day software vulnerability lifespans representing the actual time from discovery to public disclosure, we then made a more aggressive estimate. In this case, we estimated that in the worst year there were, on average, 4500 0Day software vulnerabilities in existence on any given day. We then proceeded to identify the subset of software applications likely to be used in some control systems, analyzed the associated subset of vulnerabilities, and characterized their lifespans. Using the previously developed method of analysis, we very conservatively estimated 250 control system related 0Day vulnerabilities in existence on any given day. While reasonable, this first order estimate for control systems is probably far more conservative than those made for general software systems since the estimate did not include vulnerabilities unique to control system specific components. These control system specific vulnerabilities were unable to be included in the estimate for a variety of reasons with the most problematic being that the public announcement of unique control system vulnerabilities is very sparse. Consequently, with the intent to improve the above 0Day estimate for control systems, we first identified the additional, unique to control systems, vulnerability estimation constraints and then investigated new mechanisms which may be useful for estimating the number of unique 0Day software vulnerabilities found in control system components. We proceeded to identify a number of new mechanisms and approaches for estimating and incorporating control system specific vulnerabilities into an improved 0Day estimation method. These new mechanisms and approaches appear promising and will be more rigorously evaluated during the course of the next year.
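
    The estimation logic described here combines a daily announcement rate with a distribution of 0Day lifespans; in steady state the expected number of 0Days in existence on a given day is roughly the announcement rate multiplied by the mean lifespan (a Little's-law argument). The sketch below computes that figure from invented inputs; the numbers are illustrative only and are not the study's data.

      # Minimal sketch of the steady-state reasoning described above: if vulnerabilities
      # are announced (and thereby stop being 0Days) at some daily rate, and each spent
      # some lifespan as a 0Day before announcement, then on an average day roughly
      #     (announcements per day) x (mean 0Day lifespan in days)
      # 0Days are in existence. All numbers below are invented for illustration.

      lifespans_days = [30, 90, 150, 210, 400, 900, 1200]   # hypothetical sampled 0Day lifespans
      announcements_per_day = 10.0                          # hypothetical daily announcement rate

      mean_lifespan = sum(lifespans_days) / len(lifespans_days)
      expected_zero_days_in_existence = announcements_per_day * mean_lifespan

      print(f"mean lifespan: {mean_lifespan:.0f} days")
      print(f"expected 0Days in existence on an average day: {expected_zero_days_in_existence:.0f}")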

  18. Software risk estimation and management techniques at JPL

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Lum, K.

    2002-01-01

    In this talk we discuss how uncertainty has been incorporated into the JPL software model through probability-based estimates, how risk is addressed, and how cost risk is currently being explored via a variety of approaches, from traditional risk lists to detailed WBS-based risk estimates to the Defect Detection and Prevention (DDP) tool.
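
    A common way to turn a point estimate into a probabilistic one is to treat effort as a distribution centred on the point estimate and report percentiles of the simulated outcomes. The sketch below does this with a lognormal distribution; it is a generic Monte Carlo illustration with invented parameters, not the JPL model or the DDP tool.

      # Minimal sketch of a probabilistic cost estimate: sample effort from a lognormal
      # distribution centred on the point estimate and report risk percentiles. Generic
      # illustration only; the parameters are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      point_estimate_pm = 120.0      # hypothetical point estimate, person-months
      sigma = 0.35                   # hypothetical lognormal spread reflecting estimation uncertainty

      samples = rng.lognormal(mean=np.log(point_estimate_pm), sigma=sigma, size=100_000)
      p50, p70, p90 = np.percentile(samples, [50, 70, 90])
      print(f"50th percentile: {p50:.0f} PM   70th: {p70:.0f} PM   90th: {p90:.0f} PM")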

  19. CrossTalk: The Journal of Defense Software Engineering. Volume 18, Number 4

    DTIC Science & Technology

    2005-04-01

    older automated cost-estimating tools are no longer being actively marketed but are still in use such as CheckPoint, COCOMO, ESTIMACS, REVIC, and SPQR...estimation tools: SPQR/20, Checkpoint, and KnowledgePlan. These software estimation tools pioneered the use of function point metrics for sizing and

  20. Requirements for guidelines systems: implementation challenges and lessons from existing software-engineering efforts.

    PubMed

    Shah, Hemant; Allard, Raymond D; Enberg, Robert; Krishnan, Ganesh; Williams, Patricia; Nadkarni, Prakash M

    2012-03-09

    A large body of work in the clinical guidelines field has identified requirements for guideline systems, but there are formidable challenges in translating such requirements into production-quality systems that can be used in routine patient care. Detailed analysis of requirements from an implementation perspective can be useful in helping define sub-requirements to the point where they are implementable. Further, additional requirements emerge as a result of such analysis. During such an analysis, study of examples of existing, software-engineering efforts in non-biomedical fields can provide useful signposts to the implementer of a clinical guideline system. In addition to requirements described by guideline-system authors, comparative reviews of such systems, and publications discussing information needs for guideline systems and clinical decision support systems in general, we have incorporated additional requirements related to production-system robustness and functionality from publications in the business workflow domain, in addition to drawing on our own experience in the development of the Proteus guideline system (http://proteme.org). The sub-requirements are discussed by conveniently grouping them into the categories used by the review of Isern and Moreno 2008. We cite previous work under each category and then provide sub-requirements under each category, and provide example of similar work in software-engineering efforts that have addressed a similar problem in a non-biomedical context. When analyzing requirements from the implementation viewpoint, knowledge of successes and failures in related software-engineering efforts can guide implementers in the choice of effective design and development strategies.

  1. Requirements for guidelines systems: implementation challenges and lessons from existing software-engineering efforts

    PubMed Central

    2012-01-01

    Background A large body of work in the clinical guidelines field has identified requirements for guideline systems, but there are formidable challenges in translating such requirements into production-quality systems that can be used in routine patient care. Detailed analysis of requirements from an implementation perspective can be useful in helping define sub-requirements to the point where they are implementable. Further, additional requirements emerge as a result of such analysis. During such an analysis, study of examples of existing, software-engineering efforts in non-biomedical fields can provide useful signposts to the implementer of a clinical guideline system. Methods In addition to requirements described by guideline-system authors, comparative reviews of such systems, and publications discussing information needs for guideline systems and clinical decision support systems in general, we have incorporated additional requirements related to production-system robustness and functionality from publications in the business workflow domain, in addition to drawing on our own experience in the development of the Proteus guideline system (http://proteme.org). Results The sub-requirements are discussed by conveniently grouping them into the categories used by the review of Isern and Moreno 2008. We cite previous work under each category and then provide sub-requirements under each category, and provide example of similar work in software-engineering efforts that have addressed a similar problem in a non-biomedical context. Conclusions When analyzing requirements from the implementation viewpoint, knowledge of successes and failures in related software-engineering efforts can guide implementers in the choice of effective design and development strategies. PMID:22405400

  2. Securing Ground Data System Applications for Space Operations

    NASA Technical Reports Server (NTRS)

    Pajevski, Michael J.; Tso, Kam S.; Johnson, Bryan

    2014-01-01

    The increasing prevalence and sophistication of cyber attacks has prompted the Multimission Ground Systems and Services (MGSS) Program Office at Jet Propulsion Laboratory (JPL) to initiate the Common Access Manager (CAM) effort to protect software applications used in Ground Data Systems (GDSs) at JPL and other NASA Centers. The CAM software provides centralized services and software components used by GDS subsystems to meet access control requirements and ensure data integrity, confidentiality, and availability. In this paper we describe the CAM software; examples of its integration with spacecraft commanding software applications and an information management service; and measurements of its performance and reliability.

  3. TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willenbring, James M.; Bartlett, Roscoe Ainsworth; Heroux, Michael Allen

    2012-01-01

    Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

  4. Data Centric Development Methodology

    ERIC Educational Resources Information Center

    Khoury, Fadi E.

    2012-01-01

    Data centric applications, an important effort of software development in large organizations, have mostly adopted a software methodology, such as waterfall or the Rational Unified Process, as the framework for their development. These methodologies can work for structural, procedural, or object-oriented applications, but fail to capture…

  5. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.

  6. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid to models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.
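
    A minimal sketch (in Python, not the actual TEAMS analyzer code) of the kind of automated rule check described above: hypothetical rules are validated against a small component/failure-mode/port model, and any inconsistencies are collected into a report. All component names and rules here are illustrative assumptions.

        # Hypothetical miniature model: components with failure modes and ports.
        model = {
            "ValveA":  {"failure_modes": ["stuck_open"], "ports": ["in", "out"]},
            "SensorB": {"failure_modes": [],             "ports": ["out"]},
            "PumpC":   {"failure_modes": ["no_flow"],    "ports": []},
        }

        # Illustrative rules (not TEAMS rules): each returns a list of inconsistency strings.
        def rule_has_failure_mode(name, comp):
            return [] if comp["failure_modes"] else [f"{name}: no failure modes defined"]

        def rule_has_ports(name, comp):
            return [] if comp["ports"] else [f"{name}: no ports defined"]

        def validate(model, rules):
            """Run every rule against every component and collect inconsistencies."""
            report = []
            for name, comp in model.items():
                for rule in rules:
                    report.extend(rule(name, comp))
            return report

        if __name__ == "__main__":
            for line in validate(model, [rule_has_failure_mode, rule_has_ports]):
                print(line)   # e.g. "SensorB: no failure modes defined"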

  7. Predicting Software Suitability Using a Bayesian Belief Network

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.

    2005-01-01

    The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
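
    A minimal sketch of the idea behind such a network, written in Python with invented probability tables rather than the ones calibrated in the paper: the marginal probability that the product is suitable is obtained by summing the conditional probability of suitability over the discrete states of team skill, process maturity, and problem complexity.

        from itertools import product

        # Hypothetical priors over the three driving factors (states: "high"/"low").
        p_skill      = {"high": 0.6, "low": 0.4}
        p_maturity   = {"high": 0.5, "low": 0.5}
        p_complexity = {"high": 0.3, "low": 0.7}

        # Hypothetical CPT: P(product suitable | skill, maturity, complexity).
        p_suitable = {
            ("high", "high", "low"):  0.90, ("high", "high", "high"): 0.70,
            ("high", "low",  "low"):  0.75, ("high", "low",  "high"): 0.50,
            ("low",  "high", "low"):  0.65, ("low",  "high", "high"): 0.40,
            ("low",  "low",  "low"):  0.45, ("low",  "low",  "high"): 0.20,
        }

        # Marginalize: P(suitable) = sum over all factor states of
        # P(suitable | s, m, c) * P(s) * P(m) * P(c).
        prob = sum(
            p_suitable[(s, m, c)] * p_skill[s] * p_maturity[m] * p_complexity[c]
            for s, m, c in product(p_skill, p_maturity, p_complexity)
        )
        print(f"P(product suitable) = {prob:.3f}")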

  8. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data

    PubMed Central

    Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel

    2014-01-01

    This work is part of an ongoing effort toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work exposes a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198

  9. Space Life Support Engineering Program

    NASA Technical Reports Server (NTRS)

    Seagrave, Richard C.

    1993-01-01

    This report covers the second year of research relating to the development of closed-loop long-term life support systems. Emphasis was placed on the development of dynamic simulation techniques and software and on performing a thermodynamic systems analysis in an effort to begin optimizing the system needed for water purification. Four appendices are attached. The first covers the ASPEN modeling of the closed loop Environmental Control Life Support System (ECLSS) and its thermodynamic analysis. The second is a report on the dynamic model development for water regulation in humans. The third describes the development of an interactive computer-based model for determining exercise limitations. The fourth attachment is an estimate of the second law thermodynamic efficiency of the various units comprising an ECLSS.

  10. Open-source software: not quite endsville.

    PubMed

    Stahl, Matthew T

    2005-02-01

    Open-source software will never achieve ubiquity. There are environments in which it simply does not flourish. By its nature, open-source development requires free exchange of ideas, community involvement, and the efforts of talented and dedicated individuals. However, pressures can come from several sources that prevent this from happening. In addition, openness and complex licensing issues invite misuse and abuse. Care must be taken to avoid the pitfalls of open-source software.

  11. Survivability as a Tool for Evaluating Open Source Software

    DTIC Science & Technology

    2015-06-01

    the thesis limited the program development, so it is only able to process project issues (bugs or feature requests), which is an important metric for...Ideally, these insights may provide an analytic framework to generate guidance for decision makers that may support the inclusion of OSS to more...refine their efforts to build quality software and to strengthen their software development communities. 1.4 Research Questions This thesis addresses

  12. Development of a Communications Front End Processor (FEP) for the VAX-11/780 Using an LSI-11/23.

    DTIC Science & Technology

    1983-12-01

    Requirements Analysis ... proven to be useful [25] during the Software Development Life Cycle of a project. Development tools and documentation aids used throughout this effort include "Structure Charts" (ref Appendix B), a "Data Dictionary" (ref Appendix C), and Program Design Language (PDL).

  13. Evaluation and Validation (E&V) Team Public Report. Volume 5

    DTIC Science & Technology

    1990-10-31

    aspects, software engineering practices, etc. The E&V requirements which are developed will be used to guide the E&V technical effort. The currently...interoperability of Ada software engineering environment tools and data. The scope of the CAIS-A includes the functionality affecting transportability that is...requirement that they be CAIS conforming tools or data. That is, for example numerous CIVC data exist on special purpose software currently available

  14. Warfighting Concepts to Future Weapon System Designs (WARCON)

    DTIC Science & Technology

    2003-09-12

    34* Software design documents rise to litigation. "* A Material List "Cost information that may support, or may * Final Engineering Process Maps be...document may include design the system as derived from the engineering design, software development, SRD. MTS Technologies, Inc. 26 FOR OFFICIAL USE...document, early in the development phase. It is software engineers produce the vision of important to establish a standard, formal the design effort. As

  15. RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.

    PubMed

    Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z

    2017-04-01

    We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and single-hit multi-target model, are included in the software. RAD-ADAPT uses the maximum likelihood estimation method to obtain parameter estimates with the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
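
    A minimal sketch of the estimation approach described above, not the RAD-ADAPT code itself: colony counts at each dose are treated as Poisson with mean plated_cells * PE * exp(-(alpha*D + beta*D^2)) (the linear-quadratic model), and alpha, beta, and the plating efficiency PE are obtained by maximizing the Poisson likelihood. Written in Python for illustration; the example data are invented.

        import numpy as np
        from scipy.optimize import minimize

        # Invented example data: dose (Gy), cells plated, colonies counted.
        dose   = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
        plated = np.array([200, 200, 400, 1000, 4000])
        counts = np.array([160, 120, 150, 140, 110])

        def neg_log_likelihood(params):
            """Poisson negative log-likelihood for the linear-quadratic model."""
            log_pe, alpha, beta = params
            surv = np.exp(-(alpha * dose + beta * dose**2))   # LQ surviving fraction
            lam = plated * np.exp(log_pe) * surv              # expected colony count
            return np.sum(lam - counts * np.log(lam))         # additive constants dropped

        res = minimize(neg_log_likelihood, x0=[np.log(0.8), 0.2, 0.02],
                       method="Nelder-Mead")
        log_pe, alpha, beta = res.x
        print(f"PE={np.exp(log_pe):.2f}, alpha={alpha:.3f}/Gy, beta={beta:.4f}/Gy^2")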

  16. Software for Photometric and Astrometric Reduction of Video Meteors

    NASA Astrophysics Data System (ADS)

    Atreya, Prakash; Christou, Apostolos

    2007-12-01

    SPARVM is a Software for Photometric and Astrometric Reduction of Video Meteors being developed at Armagh Observatory. It is written in Interactive Data Language (IDL) and is designed to run primarily on the Linux platform. The basic features of the software will be the derivation of light curves and the estimation of angular velocity and radiant position for single-station data; for double-station data, the calculation of the 3D coordinates, velocity, and brightness of meteors, and the estimation of the meteoroid's orbit including uncertainties. Currently, the software supports extraction of time and date from video frames, estimation of camera pointing (azimuth, altitude), finding stellar sources in video frames, and transformation of coordinates from video frames to the horizontal coordinate system (azimuth, altitude) and the equatorial coordinate system (RA, Dec).
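
    A minimal sketch (in Python, not SPARVM itself, which is written in IDL) of the final coordinate step mentioned above: converting a horizontal position (azimuth measured from north through east, altitude) to equatorial coordinates (RA, Dec), given the site latitude and the local sidereal time. The azimuth convention and the example numbers are assumptions for illustration.

        import math

        def horizontal_to_equatorial(az_deg, alt_deg, lat_deg, lst_hours):
            """Convert azimuth/altitude to (RA in hours, Dec in degrees).

            Azimuth is assumed to be measured from north, increasing eastward.
            """
            az, alt, lat = map(math.radians, (az_deg, alt_deg, lat_deg))
            # Declination from the standard spherical-triangle relation.
            sin_dec = math.sin(alt) * math.sin(lat) + math.cos(alt) * math.cos(lat) * math.cos(az)
            dec = math.asin(sin_dec)
            # Hour angle, using atan2 to resolve the quadrant.
            sin_ha = -math.sin(az) * math.cos(alt) / math.cos(dec)
            cos_ha = (math.sin(alt) - math.sin(lat) * sin_dec) / (math.cos(lat) * math.cos(dec))
            ha_hours = math.degrees(math.atan2(sin_ha, cos_ha)) / 15.0
            ra_hours = (lst_hours - ha_hours) % 24.0
            return ra_hours, math.degrees(dec)

        # Example: a point due south at 45 degrees altitude, seen from latitude 54.35 N.
        print(horizontal_to_equatorial(180.0, 45.0, 54.35, lst_hours=6.0))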

  17. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1993-01-01

    The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme of a Reed Solomon outer code and a convolutional inner code versus a Reed Solomon only code scheme has been investigated as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
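
    A minimal Python sketch of how an interleaver disperses burst errors, using a simple rectangular block interleaver as an illustrative stand-in (the report studied a Periodic Convolutional Interleaver; this is not the CLEAN code):

        def block_interleave(symbols, rows, cols):
            """Write row-by-row, read column-by-column (rectangular block interleaver)."""
            assert len(symbols) == rows * cols
            matrix = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
            return [matrix[r][c] for c in range(cols) for r in range(rows)]

        def block_deinterleave(symbols, rows, cols):
            """Inverse operation: write column-by-column, read row-by-row."""
            return block_interleave(symbols, cols, rows)

        data = list(range(12))                       # 12 symbols, 3x4 interleaver
        tx = block_interleave(data, 3, 4)
        # A burst of 3 consecutive channel errors...
        corrupted = ["X" if 4 <= i <= 6 else s for i, s in enumerate(tx)]
        rx = block_deinterleave(corrupted, 3, 4)
        print(rx)   # the errors are now spread out, easier for an outer decoder to correct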

  18. Data analysis and software support for the Earth radiation budget experiment

    NASA Technical Reports Server (NTRS)

    Edmonds, W.; Natarajan, S.

    1987-01-01

    Computer programming and data analysis efforts were performed in support of the Earth Radiation Budget Experiment (ERBE) at NASA/Langley. A brief description of the ERBE is presented, followed by sections describing software development and data analysis for both prelaunch and postlaunch instrument data.

  19. The IEEE Software Engineering Standards Process

    PubMed Central

    Buckley, Fletcher J.

    1984-01-01

    Software Engineering has emerged as a field in recent years, and those involved increasingly recognize the need for standards. As a result, members of the Institute of Electrical and Electronics Engineers (IEEE) formed a subcommittee to develop these standards. This paper discusses the ongoing standards development, and associated efforts.

  20. Using Combined SFTA and SFMECA Techniques for Space Critical Software

    NASA Astrophysics Data System (ADS)

    Nicodemos, F. G.; Lahoz, C. H. N.; Abdala, M. A. D.; Saotome, O.

    2012-01-01

    This work addresses the combined Software Fault Tree Analysis (SFTA) and Software Failure Modes, Effects and Criticality Analysis (SFMECA) techniques applied to space critical software of satellite launch vehicles. The combined approach is under research as part of the Verification and Validation (V&V) efforts to increase software dependability and as future application in other projects under development at Instituto de Aeronáutica e Espaço (IAE). The applicability of such approach was conducted on system software specification and applied to a case study based on the Brazilian Satellite Launcher (VLS). The main goal is to identify possible failure causes and obtain compensating provisions that lead to inclusion of new functional and non-functional system software requirements.

  1. Software development predictors, error analysis, reliability models and software metric analysis

    NASA Technical Reports Server (NTRS)

    Basili, Victor

    1983-01-01

    The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.

  2. A measurement system for large, complex software programs

    NASA Technical Reports Server (NTRS)

    Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.

    1994-01-01

    This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.

  3. Results of solar electric thrust vector control system design, development and tests

    NASA Technical Reports Server (NTRS)

    Fleischer, G. E.

    1973-01-01

    Efforts to develop and test a thrust vector control system (TVCS) for a solar-energy-powered ion engine array are described. The results of solar electric propulsion system technology (SEPST) III real-time tests of present versions of TVCS hardware in combination with computer-simulated attitude dynamics of a solar electric multi-mission spacecraft (SEMMS) Phase A-type spacecraft configuration are summarized. Work on an improved solar electric TVCS, based on the use of a state estimator, is described. SEPST III tests of TVCS hardware have generally proved successful and dynamic response of the system is close to predictions. It appears that, if TVCS electronic hardware can be effectively replaced by control computer software, a significant advantage in control capability and flexibility can be gained in future developmental testing, with practical implications for flight systems as well. Finally, it is concluded from computer simulations that TVCS stabilization using rate estimation promises a substantial performance improvement over the present design.

  4. Use of Massive Parallel Computing Libraries in the Context of Global Gravity Field Determination from Satellite Data

    NASA Astrophysics Data System (ADS)

    Brockmann, J. M.; Schuh, W.-D.

    2011-07-01

    The estimation of the global Earth's gravity field parametrized as a finite spherical harmonic series is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e. the number of parameters to be estimated) and on the other hand on the number of observations (several million for, e.g., observations from the GOCE satellite mission). To circumvent these restrictions, a massive parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclic distributed on a processor grid composed of a large number of (distributed-memory) computers. Using this set of standard HPC libraries has the benefit that once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
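
    A minimal Python sketch of the block-cyclic mapping that these libraries assume: given a global matrix index, it returns which process in a Pr x Pc grid owns the element and the local index on that process (zero-based indices, with the first block on process row/column 0, the usual ScaLAPACK convention). This is an illustration of the distribution scheme, not part of the software described above.

        def block_cyclic_owner(global_idx, block_size, n_procs):
            """Return (owning process, local index) along one matrix dimension."""
            block = global_idx // block_size            # which block the index falls in
            proc = block % n_procs                      # blocks are dealt out cyclically
            local_block = block // n_procs              # blocks this process already holds
            local_idx = local_block * block_size + global_idx % block_size
            return proc, local_idx

        def owner_2d(i, j, mb, nb, p_rows, p_cols):
            """Owner of element (i, j) on a p_rows x p_cols process grid."""
            p_row, li = block_cyclic_owner(i, mb, p_rows)
            p_col, lj = block_cyclic_owner(j, nb, p_cols)
            return (p_row, p_col), (li, lj)

        # Example: 64x64 blocks on a 4x8 process grid; where does element (200, 515) live?
        print(owner_2d(200, 515, mb=64, nb=64, p_rows=4, p_cols=8))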

  5. Software assurance standard

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.

  6. Grasping objects autonomously in simulated KC-135 zero-g

    NASA Technical Reports Server (NTRS)

    Norsworthy, Robert S.

    1994-01-01

    The KC-135 aircraft was chosen for simulated zero gravity testing of the Extravehicular Activity Helper/Retriever (EVAHR). A software simulation of the EVAHR hardware, KC-135 flight dynamics, collision detection, and grasp impact dynamics has been developed to integrate and test the EVAHR software prior to flight testing on the KC-135. The EVAHR software will perform target pose estimation, tracking, and motion estimation for rigid, freely rotating, polyhedral objects. Manipulator grasp planning and trajectory control software has also been developed to grasp targets while avoiding collisions.

  7. Space shuttle onboard navigation console expert/trainer system

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bochsler, Dan

    1987-01-01

    A software system for use in enhancing operational performance as well as training ground controllers in monitoring onboard Space Shuttle navigation sensors is described. The Onboard Navigation (ONAV) development reflects a trend toward following a structured and methodical approach to development. The ONAV system must deal with integrated conventional and expert system software, complex interfaces, and implementation limitations due to the target operational environment. An overview of the onboard navigation sensor monitoring function is presented, along with a description of guidelines driving the development effort, requirements that the system must meet, current progress, and future efforts.

  8. Assessment of Flood Disaster Impacts in Cambodia: Implications for Rapid Disaster Response

    NASA Astrophysics Data System (ADS)

    Ahamed, Aakash; Bolten, John; Doyle, Colin

    2016-04-01

    Disaster monitoring systems can provide near real time estimates of population and infrastructure affected by sudden onset natural hazards. This information is useful to decision makers allocating lifesaving resources following disaster events. Floods are the world's most common and devastating disasters (UN, 2004; Doocy et al., 2013), and are particularly frequent and severe in the developing countries of Southeast Asia (Long and Trong, 2001; Jonkman, 2005; Kahn, 2005; Stromberg, 2007; Kirsch et al., 2012). Climate change, a strong regional monsoon, and widespread hydropower construction contribute to a complex and unpredictable regional hydrodynamic regime. As such, there is a critical need for novel techniques to assess flood impacts to population and infrastructure with haste during and following flood events in order to enable governments and agencies to optimize response efforts following disasters. Here, we build on methods to determine regional flood extent in near real time and develop systems that automatically quantify the socioeconomic impacts of flooding in Cambodia. Software developed on cloud based, distributed processing Geographic Information Systems (GIS) is used to demonstrate spatial and numerical estimates of population, households, roadways, schools, hospitals, airports, agriculture and fish catch affected by severe monsoon flooding occurring in the Cambodian portion of Lower Mekong River Basin in 2011. Results show modest agreement with government and agency estimates. Maps and statistics generated from the system are intended to complement on the ground efforts and bridge information gaps to decision makers. The system is open source, flexible, and can be applied to other disasters (e.g. earthquakes, droughts, landslides) in various geographic regions.

  9. Patient characteristics associated with differences in radiation exposure from pediatric abdomen-pelvis CT scans: a quantile regression analysis.

    PubMed

    Cooper, Jennifer N; Lodwick, Daniel L; Adler, Brent; Lee, Choonsik; Minneci, Peter C; Deans, Katherine J

    2017-06-01

    Computed tomography (CT) is a widely used diagnostic tool in pediatric medicine. However, due to concerns regarding radiation exposure, it is essential to identify patient characteristics associated with higher radiation burden from CT imaging, in order to more effectively target efforts towards dose reduction. Our objective was to identify the effects of various demographic and clinical patient characteristics on radiation exposure from single abdomen/pelvis CT scans in children. CT scans performed at our institution between January 2013 and August 2015 in patients under 16 years of age were processed using a software tool that estimates patient-specific organ and effective doses and merges these estimates with data from the electronic health record and billing record. Quantile regression models at the 50th, 75th, and 90th percentiles were used to estimate the effects of patients' demographic and clinical characteristics on effective dose. 2390 abdomen/pelvis CT scans (median effective dose 1.52mSv) were included. Of all characteristics examined, only older age, female gender, higher BMI, and whether the scan was a multiphase exam or an exam that required repeating for movement were significant predictors of higher effective dose at each quantile examined (all p<0.05). The effects of obesity and multiphase or repeat scanning on effective dose were magnified in higher dose scans. Older age, female gender, obesity, and multiphase or repeat scanning are all associated with increased effective dose from abdomen/pelvis CT. Targeted efforts to reduce dose from abdominal CT in these groups should be undertaken. Copyright © 2017 Elsevier Ltd. All rights reserved.
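
    A minimal sketch of quantile regression at the quantiles used in the study (50th, 75th, 90th), fit by minimizing the pinball (check) loss. This is illustrative only, with invented data and a single predictor; it is not the software or the clinical model used above.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        # Invented data: predictor x (e.g., a patient characteristic) and outcome y (e.g., dose).
        x = rng.uniform(0, 10, 300)
        y = 1.0 + 0.4 * x + rng.gamma(shape=2.0, scale=0.5, size=300)   # skewed noise

        def pinball_loss(params, tau):
            """Check loss for the linear model y ~ b0 + b1*x at quantile tau."""
            b0, b1 = params
            r = y - (b0 + b1 * x)
            return np.sum(np.maximum(tau * r, (tau - 1) * r))

        for tau in (0.5, 0.75, 0.9):
            res = minimize(pinball_loss, x0=[0.0, 0.0], args=(tau,), method="Nelder-Mead")
            print(f"tau={tau}: intercept={res.x[0]:.2f}, slope={res.x[1]:.2f}")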

  10. Visual and computer software-aided estimates of Dupuytren's contractures: correlation with clinical goniometric measurements.

    PubMed

    Smith, R P; Dias, J J; Ullah, A; Bhowal, B

    2009-05-01

    Corrective surgery for Dupuytren's disease represents a significant proportion of a hand surgeon's workload. The decision to go ahead with surgery and the success of surgery require measuring the degree of contracture of the diseased finger(s). This is performed in clinic with a goniometer, pre- and postoperatively. Monitoring the recurrence of the contracture can inform on surgical outcome, research and audit. We compared visual and computer software-aided estimation of Dupuytren's contractures to clinical goniometric measurements in 60 patients with Dupuytren's disease. Patients' hands were digitally photographed. There were 76 contracted finger joints--70 proximal interphalangeal joints and six distal interphalangeal joints. The degrees of contracture of these images were visually assessed by six orthopaedic staff of differing seniority and re-assessed with computer software. Across assessors, the Pearson correlation between the goniometric measurements and the visual estimations was 0.83 and this significantly improved to 0.88 with computer software. Reliability with intra-class correlations achieved 0.78 and 0.92 for the visual and computer-aided estimations, respectively, and with test-retest analysis, 0.92 for visual estimation and 0.95 for computer-aided measurements. Visual estimations of Dupuytren's contractures correlate well with actual clinical goniometric measurements and improve further if measured with computer software. Digital images permit monitoring of contracture after surgery and may facilitate research into disease progression and auditing of surgical technique.
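
    A minimal Python sketch of the basic agreement statistic reported above: the Pearson correlation between goniometric measurements and image-based estimates. The angles are invented and numpy's corrcoef is used rather than any clinical software.

        import numpy as np

        # Invented contracture angles (degrees) for a handful of joints.
        goniometer = np.array([35, 50, 20, 65, 40, 55, 30, 70])
        visual     = np.array([30, 55, 25, 60, 45, 50, 25, 80])
        software   = np.array([34, 52, 21, 63, 42, 54, 28, 72])

        r_visual   = np.corrcoef(goniometer, visual)[0, 1]
        r_software = np.corrcoef(goniometer, software)[0, 1]
        print(f"Pearson r, visual vs goniometer:   {r_visual:.2f}")
        print(f"Pearson r, software vs goniometer: {r_software:.2f}")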

  11. SDDL- SOFTWARE DESIGN AND DOCUMENTATION LANGUAGE

    NASA Technical Reports Server (NTRS)

    Kleine, H.

    1994-01-01

    Effective, efficient communication is an essential element of the software development process. The Software Design and Documentation Language (SDDL) provides an effective communication medium to support the design and documentation of complex software applications. SDDL supports communication between all the members of a software design team and provides for the production of informative documentation on the design effort. Even when an entire development task is performed by a single individual, it is important to explicitly express and document communication between the various aspects of the design effort including concept development, program specification, program development, and program maintenance. SDDL ensures that accurate documentation will be available throughout the entire software life cycle. SDDL offers an extremely valuable capability for the design and documentation of complex programming efforts ranging from scientific and engineering applications to data management and business systems. Throughout the development of a software design, the SDDL generated Software Design Document always represents the definitive word on the current status of the ongoing, dynamic design development process. The document is easily updated and readily accessible in a familiar, informative form to all members of the development team. This makes the Software Design Document an effective instrument for reconciling misunderstandings and disagreements in the development of design specifications, engineering support concepts, and the software design itself. Using the SDDL generated document to analyze the design makes it possible to eliminate many errors that might not be detected until coding and testing is attempted. As a project management aid, the Software Design Document is useful for monitoring progress and for recording task responsibilities. SDDL is a combination of language, processor, and methodology. The SDDL syntax consists of keywords to invoke design structures and a collection of directives which control processor actions. The designer has complete control over the choice of keywords, commanding the capabilities of the processor in a way which is best suited to communicating the intent of the design. The SDDL processor translates the designer's creative thinking into an effective document for communication. The processor performs as many automatic functions as possible, thereby freeing the designer's energy for the creative effort. Document formatting includes graphical highlighting of structure logic, accentuation of structure escapes and module invocations, logic error detection, and special handling of title pages and text segments. The SDDL generated document contains software design summary information including module invocation hierarchy, module cross reference, and cross reference tables of user selected words or phrases appearing in the document. The basic forms of the methodology are module and block structures and the module invocation statement. A design is stated in terms of modules that represent problem abstractions which are complete and independent enough to be treated as separate problem entities. Blocks are lower-level structures used to build the modules. Both kinds of structures may have an initiator part, a terminator part, an escape segment, or a substructure. The SDDL processor is written in PASCAL for batch execution on a DEC VAX series computer under VMS. SDDL was developed in 1981 and last updated in 1984.

  12. When is best-worst best? A comparison of best-worst scaling, numeric estimation, and rating scales for collection of semantic norms.

    PubMed

    Hollis, Geoff; Westbury, Chris

    2018-02-01

    Large-scale semantic norms have become both prevalent and influential in recent psycholinguistic research. However, little attention has been directed towards understanding the methodological best practices of such norm collection efforts. We compared the quality of semantic norms obtained through rating scales, numeric estimation, and a less commonly used judgment format called best-worst scaling. We found that best-worst scaling usually produces norms with higher predictive validities than other response formats, and does so requiring less data to be collected overall. We also found evidence that the various response formats may be producing qualitatively, rather than just quantitatively, different data. This raises the issue of potential response format bias, which has not been addressed by previous efforts to collect semantic norms, likely because of previous reliance on a single type of response format for a single type of semantic judgment. We have made available software for creating best-worst stimuli and scoring best-worst data. We also made available new norms for age of acquisition, valence, arousal, and concreteness collected using best-worst scaling. These norms include entries for 1,040 words, of which 1,034 are also contained in the ANEW norms (Bradley & Lang, Affective norms for English words (ANEW): Instruction manual and affective ratings (pp. 1-45). Technical report C-1, the center for research in psychophysiology, University of Florida, 1999).
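
    A minimal Python sketch of one simple way best-worst data are often scored (a normalized "best-minus-worst" count); the norms described above used their own scoring software, which may differ, and the trials below are invented.

        from collections import defaultdict

        # Invented best-worst trials: (items shown, item picked best, item picked worst).
        trials = [
            (["happy", "table", "war", "kitten"], "happy", "war"),
            (["war", "sunset", "table", "grief"], "sunset", "grief"),
            (["kitten", "grief", "happy", "sunset"], "happy", "grief"),
        ]

        counts = defaultdict(lambda: {"best": 0, "worst": 0, "shown": 0})
        for shown, best, worst in trials:
            for item in shown:
                counts[item]["shown"] += 1
            counts[best]["best"] += 1
            counts[worst]["worst"] += 1

        # Best-minus-worst score, normalized by how often the item appeared.
        scores = {item: (c["best"] - c["worst"]) / c["shown"] for item, c in counts.items()}
        for item, s in sorted(scores.items(), key=lambda kv: -kv[1]):
            print(f"{item:8s} {s:+.2f}")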

  13. Profile of NASA software engineering: Lessons learned from building the baseline

    NASA Technical Reports Server (NTRS)

    Hall, Dana; Mcgarry, Frank

    1993-01-01

    It is critically important in any improvement activity to first understand the organization's current status, strengths, and weaknesses and, only after that understanding is achieved, examine and implement promising improvements. This fundamental rule is certainly true for an organization seeking to further its software viability and effectiveness. This paper addresses the role of the organizational process baseline in a software improvement effort and the lessons we learned assembling such an understanding for NASA overall and for the NASA Goddard Space Flight Center in particular. We discuss important, core data that must be captured and contrast that with our experience in actually finding such information. Our baselining efforts have evolved into a set of data gathering, analysis, and crosschecking techniques and information presentation formats that may prove useful to others seeking to establish similar baselines for their organization.

  14. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    PubMed

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.

  15. Concept Development for Software Health Management

    NASA Technical Reports Server (NTRS)

    Riecks, Jung; Storm, Walter; Hollingsworth, Mark

    2011-01-01

    This report documents the work performed by Lockheed Martin Aeronautics (LM Aero) under NASA contract NNL06AA08B, delivery order NNL07AB06T. The Concept Development for Software Health Management (CDSHM) program was a NASA-funded effort sponsored by the Integrated Vehicle Health Management Project, one of the four pillars of the NASA Aviation Safety Program. The CDSHM program focused on defining a structured approach to software health management (SHM) through the development of a comprehensive failure taxonomy that is used to characterize the fundamental failure modes of safety-critical software.

  16. Managing Complexity in Next Generation Robotic Spacecraft: From a Software Perspective

    NASA Technical Reports Server (NTRS)

    Reinholtz, Kirk

    2008-01-01

    This presentation highlights the challenges in the design of software to support robotic spacecraft. Robotic spacecraft offer a higher degree of autonomy, however currently more capabilities are required, primarily in the software, while providing the same or higher degree of reliability. The complexity of designing such an autonomous system is great, particularly while attempting to address the needs for increased capabilities and high reliability without increased needs for time or money. The efforts to develop programming models for the new hardware and the integration of software architecture are highlighted.

  17. NASA's Advanced Multimission Operations System: A Case Study in Formalizing Software Architecture Evolution

    NASA Technical Reports Server (NTRS)

    Barnes, Jeffrey M.

    2011-01-01

    All software systems of significant size and longevity eventually undergo changes to their basic architectural structure. Such changes may be prompted by evolving requirements, changing technology, or other reasons. Whatever the cause, software architecture evolution is commonplace in real world software projects. Recently, software architecture researchers have begun to study this phenomenon in depth. However, this work has suffered from problems of validation; research in this area has tended to make heavy use of toy examples and hypothetical scenarios and has not been well supported by real world examples. To help address this problem, I describe an ongoing effort at the Jet Propulsion Laboratory to re-architect the Advanced Multimission Operations System (AMMOS), which is used to operate NASA's deep-space and astrophysics missions. Based on examination of project documents and interviews with project personnel, I describe the goals and approach of this evolution effort and then present models that capture some of the key architectural changes. Finally, I demonstrate how approaches and formal methods from my previous research in architecture evolution may be applied to this evolution, while using languages and tools already in place at the Jet Propulsion Laboratory.

  18. SABrE User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, S.A.

    In a computing landscape with a plethora of different hardware architectures and supporting software systems ranging from compilers to operating systems, there is an obvious and strong need for a philosophy of software development that lends itself to the design and construction of portable code systems. The current efforts to standardize software bear witness to this need. SABrE is an effort to implement a software development environment which is itself portable and promotes the design and construction of portable applications. SABrE does not include such important tools as editors and compilers. Well-built tools of that kind are readily available across virtually all computer platforms. The areas that SABrE addresses are at a higher level involving issues such as data portability, portable inter-process communication, and graphics. These blocks of functionality have particular significance to the kind of code development done at LLNL. That is partly why the general computing community has not supplied us with these tools already. This is another key feature of the software development environments which we must recognize. The general computing community cannot and should not be expected to produce all of the tools which we require.

  19. Net-VISA used as a complement to standard software at the CTBTO: initial operational experience with next-generation software.

    NASA Astrophysics Data System (ADS)

    Le Bras, R. J.; Arora, N. S.; Kushida, N.; Kebede, F.; Feitio, P.; Tomuta, E.

    2017-12-01

    The International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has reached out to the broader scientific community through a series of conferences, the latest of which took place in June 2017 in Vienna, Austria. Following this outreach effort, and after the start of research and development in 2009, the NET-VISA software, which follows a Bayesian modelling approach, has been developed to improve the key step of automatic association of joint seismic, hydro-acoustic, and infrasound detections. When compared with the current operational system, it has consistently been shown in off-line tests to improve the overlap with the analyst-reviewed Reviewed Event Bulletin (REB) by ten percent, to an average of 85% overlap, while the inconsistency rate remains essentially the same at about 50%. Testing by analysts under realistic conditions on a few days of data has also demonstrated the software's performance in finding additional events that qualify for publication in the REB. Starting in August 2017, the automatic events produced by the software will be reviewed by analysts at the CTBTO, and we report on the initial evaluation of this introduction into operations.

  1. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinda, Peter August

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3), although a prototype connecting Palacios with the GEM5 architectural simulator was demonstrated, our conclusion was that such a platform was less useful for design space exploration than anticipated due to inherent complexity of the connection between the instruction set architecture level and the microarchitectural level. For effort (4), we found that a code injection approach proved to be more fruitful. The results of our efforts are publicly available in the open source Palacios codebase and published papers, all of which are available from the project web site, v3vee.org. Palacios is currently one of the two codebases (the other being Sandia’s Kitten lightweight kernel) that underlies the node operating system for the DOE Hobbes Project, one of two projects tasked with building a systems software prototype for the national exascale computing effort.

  2. Computational Infrastructure for Geodynamics (CIG)

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.

    2004-12-01

    Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts and although this approach has proven successful, its strength for solving problems of interest is now starting to show its limitations as we try to share codes and algorithms or when we want to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to leverage and develop long-term strategic partnerships with open source development efforts within the larger thrusts of scientific computing and geoinformatics. These strategic partnerships are essential as the frontier has moved into multi-scale and multi-physics problems in which many investigators now want to use simulation software for data interpretation, data assimilation, and hypothesis testing.

  3. Software Tools for Formal Specification and Verification of Distributed Real-Time Systems.

    DTIC Science & Technology

    1997-09-30

    set of software tools for specification and verification of distributed real-time systems using formal methods. The task of this SBIR Phase II effort...to be used by designers of real-time systems for early detection of errors. The mathematical complexity of formal specification and verification has

  4. Measuring the Impact of Agile Coaching on Students' Performance

    ERIC Educational Resources Information Center

    Rodríguez, Guillermo; Soria, Álvaro; Campo, Marcelo

    2016-01-01

    Nowadays, considerable attention is paid to agile methods as a means to improve management of software development processes. The widespread use of such methods in professional contexts has encouraged their integration into software engineering training and undergraduate courses. Although several research efforts have focused on teaching Scrum…

  5. Early-Stage Software Design for Usability

    ERIC Educational Resources Information Center

    Golden, Elspeth

    2010-01-01

    In spite of the goodwill and best efforts of software engineers and usability professionals, systems continue to be built and released with glaring usability flaws that are costly and difficult to fix after the system has been built. Although user interface (UI) designers, be they usability or design experts, communicate usability requirements to…

  6. Pilot-in-the-Loop CFD Method Development

    DTIC Science & Technology

    2014-06-16

    CFD analysis. Coupled simulations will be run at PSU on the COCOA-4 cluster, a high performance computing cluster. The CRUNCH CFD software has...been installed on the COCOA-4 servers and initial software tests are being conducted. Initial efforts will use the Generic Frigate Shape SFS-2 to

  7. Advanced technologies for Mission Control Centers

    NASA Technical Reports Server (NTRS)

    Dalton, John T.; Hughes, Peter M.

    1991-01-01

    Advanced technologies for Mission Control Centers are presented in the form of viewgraphs. The following subject areas are covered: technology needs; current technology efforts at GSFC (human-machine interface development, object oriented software development, expert systems, knowledge-based software engineering environments, and high performance VLSI telemetry systems); and test beds.

  8. The Stabilization, Exploration, and Expression of Computer Game History

    ERIC Educational Resources Information Center

    Kaltman, Eric

    2017-01-01

    Computer games are now a significant cultural phenomenon, and a significant artistic output of humanity. However, little effort and attention have been paid to how the medium of games and interactive software developed, and even less to the historical storage of software development documentation. This thesis borrows methodologies and practices…

  9. Learning Teamwork Skills in University Programming Courses

    ERIC Educational Resources Information Center

    Sancho-Thomas, Pilar; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar

    2009-01-01

    University courses about computer programming usually seek to provide students not only with technical knowledge, but also with the skills required to work in real-life software projects. Nowadays, the development of software applications requires the coordinated efforts of the members of one or more teams. Therefore, it is important for software…

  10. Multibiodose radiation emergency triage categorization software.

    PubMed

    Ainsbury, Elizabeth A; Barnard, Stephen; Barrios, Lleonard; Fattibene, Paola; de Gelder, Virginie; Gregoire, Eric; Lindholm, Carita; Lloyd, David; Nergaard, Inger; Rothkamm, Kai; Romm, Horst; Scherthan, Harry; Thierens, Hubert; Vandevoorde, Charlot; Woda, Clemens; Wojcik, Andrzej

    2014-07-01

    In this note, the authors describe the MULTIBIODOSE software, which has been created as part of the MULTIBIODOSE project. The software enables doses estimated by networks of laboratories, using up to five retrospective (biological and physical) assays, to be combined to give a single estimate of triage category for each individual potentially exposed to ionizing radiation in a large scale radiation accident or incident. The MULTIBIODOSE software has been created in Java. The usage of the software is based on the MULTIBIODOSE Guidance: the program creates a link to a single SQLite database for each incident, and the database is administered by the lead laboratory. The software has been tested with Java runtime environment 6 and 7 on a number of different Windows, Mac, and Linux systems, using data from a recent intercomparison exercise. The Java program MULTIBIODOSE_1.0.jar is freely available to download from http://www.multibiodose.eu/software or by contacting the software administrator: MULTIBIODOSE-software@gmx.com.
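
    A minimal, purely illustrative Python sketch of combining several assay dose estimates into one value and mapping it to a triage category. This is not the MULTIBIODOSE algorithm; the inverse-variance weighting and the category cut-offs below are assumptions for illustration only, not the values in the MULTIBIODOSE Guidance.

        # Hypothetical dose estimates (Gy) and uncertainties (SD, Gy) from different assays.
        assays = {
            "dicentrics": (1.8, 0.3),
            "gamma-H2AX": (2.2, 0.6),
            "EPR":        (1.5, 0.5),
        }

        # Inverse-variance weighted mean (an assumption, not the MULTIBIODOSE method).
        weights = {name: 1.0 / sd**2 for name, (_dose, sd) in assays.items()}
        combined = sum(w * assays[name][0] for name, w in weights.items()) / sum(weights.values())

        # Assumed illustrative triage cut-offs (not the MULTIBIODOSE Guidance values).
        if combined < 1.0:
            category = "low exposure"
        elif combined < 2.0:
            category = "medium exposure"
        else:
            category = "high exposure"

        print(f"combined dose estimate: {combined:.2f} Gy -> triage category: {category}")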

  11. An experimental evaluation of software redundancy as a strategy for improving reliability

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.

    1990-01-01

    The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although it is generally accepted that the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
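
    A minimal Python sketch of the central comparison described above: the probability that a majority of a three-version system fails, computed first under an independence assumption and then under a simple common-fault model in which some inputs trigger the same fault in every version. The numbers and the dependence model are illustrative assumptions, not the model or data of the experiment.

        from math import comb

        p = 0.01          # assumed per-version failure probability on a random input
        q = 0.002         # assumed probability of an input that defeats *all* versions
        n, k = 3, 2       # 3 versions, majority voter fails when >= 2 versions fail

        def majority_fail_independent(p, n, k):
            """P(at least k of n versions fail) assuming independent failures."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        def majority_fail_common_fault(p, q, n, k):
            """Mixture model: with probability q the input defeats every version;
            otherwise versions fail independently with a residual probability chosen
            so that the marginal per-version failure probability stays equal to p."""
            p_resid = (p - q) / (1 - q)
            return q + (1 - q) * majority_fail_independent(p_resid, n, k)

        print("independent:       ", majority_fail_independent(p, n, k))
        print("with common faults:", majority_fail_common_fault(p, q, n, k))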

  12. The predictive power of SIMION/SDS simulation software for modeling ion mobility spectrometry instruments

    NASA Astrophysics Data System (ADS)

    Lai, Hanh; McJunkin, Timothy R.; Miller, Carla J.; Scott, Jill R.; Almirall, José R.

    2008-09-01

    The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program in conjunction with SolidWorks® COSMOSFloWorks® fluid dynamics software to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene, 2,7-dinitrofluorene, and cocaine). The aim of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: (1) a drift:carrier gas flow rates study assesses the ability of SIMION/SDS to correctly predict the ion drift times; (2) a drift gas composition study evaluates the accuracy in predicting the resolution; (3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.
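
    For readers unfamiliar with the quantities such a simulation must reproduce, the sketch below collects the standard low-field drift-time relations (drift velocity v = K·E, drift time t = L²/(K·V), and the reduced mobility K0 referenced to 273.15 K and 760 Torr). The parameter values are illustrative assumptions; this is not the SIMION/SDS computation itself.

        # Sketch: textbook ion-mobility relations that an IMS simulation must reproduce.
        # Parameter values are illustrative, not those of the instrument in the paper.

        def drift_time_ms(L_cm, V_volts, K0_cm2_Vs, T_kelvin, P_torr):
            """Drift time (ms) for a drift region of length L and voltage drop V."""
            # Convert reduced mobility K0 (at 273.15 K, 760 Torr) to mobility K at T, P.
            K = K0_cm2_Vs * (T_kelvin / 273.15) * (760.0 / P_torr)   # cm^2 / (V s)
            E = V_volts / L_cm                                        # V / cm
            v_d = K * E                                               # cm / s
            return (L_cm / v_d) * 1e3                                 # ms

        if __name__ == "__main__":
            # Illustrative values: 6 cm drift region, 1500 V drop, ambient pressure.
            t = drift_time_ms(L_cm=6.0, V_volts=1500.0, K0_cm2_Vs=1.45,
                              T_kelvin=298.0, P_torr=760.0)
            print(f"predicted drift time: {t:.2f} ms")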

  13. The Predictive Power of SIMION/SDS Simulation Software for Modeling Ion Mobility Spectrometry Instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanh Lai; Timothy R. McJunkin; Carla J. Miller

    2008-09-01

    The combined use of SIMION 7.0 and the statistical diffusion simulation (SDS) user program in conjunction with SolidWorks® COSMOSFloWorks® fluid dynamics software to model a complete, commercial ion mobility spectrometer (IMS) was demonstrated for the first time and compared to experimental results for tests using compounds of immediate interest in the security industry (e.g., 2,4,6-trinitrotoluene and cocaine). The aim of this research was to evaluate the predictive power of SIMION/SDS for application to IMS instruments. The simulation was evaluated against experimental results in three studies: 1) a drift:carrier gas flow rates study assesses the ability of SIMION/SDS to correctly predict the ion drift times; 2) a drift gas composition study evaluates the accuracy in predicting the resolution; and 3) a gate width study compares the simulated peak shape and peak intensity with the experimental values. SIMION/SDS successfully predicted the correct drift time, intensity, and resolution trends for the operating parameters studied. Despite the need for estimations and assumptions in the construction of the simulated instrument, SIMION/SDS was able to predict the resolution between two ion species in air within 3% accuracy. The preliminary success of IMS simulations using SIMION/SDS software holds great promise for the design of future instruments with enhanced performance.

  14. Sexual behavior and job stress in software professionals, Bengaluru - India

    PubMed Central

    Babu, Giridhara R.; Mahapatra, Tanmay; Mahapatra, Sanchita; Detels, Roger

    2013-01-01

    Background: Sexually transmitted diseases are increasingly affecting general population groups. Our earlier qualitative research called for an effort to understand the sexual exposure, activity, and behavior of software professionals in Bengaluru, India. Aim: The current study explored the association of sexual behaviors with job stress. Materials and Methods: The study design was cross-sectional, using a mixed sampling method. A total of 1071 subjects from the software sector in Bengaluru, the capital city of Karnataka, completed the self-administered questionnaire. The source population comprised all information technology/information technology enabled services (IT/ITES) professionals aged 20-59 years working in “technical functions” in 21 selected worksites (units) of the software industry. The exposure of interest was job stressors, and the outcome measures were sexual behaviors in the form of having multiple sexual partners, paid sex in the last 3 months, frequency of intercourse with irregular sexual partners, and condom use with regular partners during the last sexual act. Results: Among the study population, 74.3% reported not using a condom during their last vaginal intercourse with their regular partner. Regression estimates indicated that workers with high physical stressors had 6 times the odds of having paid for sex in the last 3 months, and those with a moderate level of income-related stress had 2.4 times the likelihood of not using a condom during the last sexual intercourse with their regular partner. Conclusion: There is scope for starting prevention programs among young professionals in the IT/ITES sector to mitigate their possible risk behaviors. PMID:24421592

  15. Information Technology. DOD Needs to Strengthen Management of Its Statutorily Mandated Software and System Process Improvement Efforts

    DTIC Science & Technology

    2009-09-01

    NII)/CIO Assistant Secretary of Defense for Networks and Information Integration/Chief Information Officer; CMMI Capability Maturity Model...a Web-based portal to share knowledge about software process-related methodologies, such as the SEI's Capability Maturity Model Integration (CMMI), the SEI's IDEAL(SM) model, and Lean Six Sigma. For example, the portal features content areas such as software acquisition management, the SEI CMMI

  16. Maintenance Metrics for Jovial (J73) Software

    DTIC Science & Technology

    1988-12-01

    pacing technology in advanced fighters, just as it has in most other weapon systems and information systems" (Canan, 1986:49). Another reason for...the magnitude of the software inside an aircraft may represent only a fraction of that aircraft's total software requirement." (Canan, 1986:49) One more...art than a science" marks program development as a largely labor-intensive, human endeavor (Canan, 1986:50). Individual effort and creativity therefore

  17. Penn State University ground software support for X-ray missions.

    NASA Astrophysics Data System (ADS)

    Townsley, L. K.; Nousek, J. A.; Corbet, R. H. D.

    1995-03-01

    The X-ray group at Penn State is charged with two software development efforts in support of X-ray satellite missions. As part of the ACIS instrument team for AXAF, the authors are developing part of the ground software to support the instrument's calibration. They are also designing a translation program for Ginga data, to change it from the non-standard FRF format, which closely parallels the original telemetry format, to FITS.

  18. CrossTalk. The Journal of Defense Software Engineering. Volume 23, Number 6, Nov/Dec 2010

    DTIC Science & Technology

    2010-11-01

    Model of architectural design. It guides developers to apply effort to their software architecture commensurate with the risks faced by...Driven Model is the promotion of risk to prominence. It is possible to apply the Risk-Driven Model to essentially any software development process...succeed without any planned architecture work, while many high-risk projects would fail without it. The Risk-Driven Model walks a middle path

  19. Technology Transfer Challenges for High-Assurance Software Engineering Tools

    NASA Technical Reports Server (NTRS)

    Koga, Dennis (Technical Monitor); Penix, John; Markosian, Lawrence Z.

    2003-01-01

    In this paper, we describe our experience with the challenges that we are currently facing in our effort to develop advanced software verification and validation tools. We categorize these challenges into several areas: cost benefits modeling, tool usability, customer application domain, and organizational issues. We provide examples of challenges in each area and identify open research issues in areas which limit our ability to transfer high-assurance software engineering tools into practice.

  20. Knowledge-based approach for generating target system specifications from a domain model

    NASA Technical Reports Server (NTRS)

    Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan

    1992-01-01

    Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.

  1. IDA Cost Research Symposium Held 25 May 1995.

    DTIC Science & Technology

    1995-08-01

    Excel Spreadsheet Publications: MCR Report TR-9507/01 Category: II.B Keywords: Government, Estimating, Missiles, Analysis, Production, Data...originally developed by Martin Marietta as part of the SASET software estimating model. To be implemented as part of the SoftEST Software Estimating Tool...following documents to report the results of its work. Reports are the most authoritative and most carefully considered products IDA

  2. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    ERIC Educational Resources Information Center

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…
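
    A heavily simplified sketch of the core computation follows: correlated-effects weights of the general form proposed in the robust variance estimation literature, with a cluster-robust standard error taken over studies. The small-sample corrections implemented in the published Stata/SPSS macros are not reproduced, and the data are invented.

        # Sketch: robust variance estimation (RVE) for dependent effect sizes.
        # Simplified correlated-effects weights; omits the small-sample corrections
        # used by the published Stata/SPSS macros. Data are invented for illustration.
        from collections import defaultdict

        def rve_meta(effects):
            """effects: list of (study_id, effect_size, sampling_variance)."""
            by_study = defaultdict(list)
            for sid, es, v in effects:
                by_study[sid].append((es, v))

            # Correlated-effects weights: 1 / (k_j * mean variance within study j).
            weights, data = [], []
            for sid, rows in by_study.items():
                k = len(rows)
                vbar = sum(v for _, v in rows) / k
                for es, _ in rows:
                    weights.append(1.0 / (k * vbar))
                    data.append((sid, es))

            W = sum(weights)
            beta = sum(w * es for w, (_, es) in zip(weights, data)) / W

            # Cluster-robust variance: squared weighted residual sums per study.
            resid_sums = defaultdict(float)
            for w, (sid, es) in zip(weights, data):
                resid_sums[sid] += w * (es - beta)
            var_beta = sum(r ** 2 for r in resid_sums.values()) / W ** 2
            return beta, var_beta ** 0.5

        if __name__ == "__main__":
            rows = [("s1", 0.30, 0.04), ("s1", 0.45, 0.05),
                    ("s2", 0.10, 0.03), ("s3", 0.25, 0.06), ("s3", 0.20, 0.06)]
            est, se = rve_meta(rows)
            print(f"RVE estimate: {est:.3f} (robust SE {se:.3f})")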

  3. Cleanroom certification model

    NASA Technical Reports Server (NTRS)

    Currit, P. A.

    1983-01-01

    The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
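
    The flavour of the certification computation can be conveyed with a hedged sketch: if the mean time to failure after the k-th observed failure is assumed to grow geometrically (MTTF_k ≈ A·B^k), a least-squares fit of log interfailure time against failure index yields the estimators. The interfailure times below are invented, and the published model's estimator details are not reproduced here.

        # Sketch: least-squares fit of a geometric reliability-growth model
        # MTTF_k ~ A * B**k to interfailure times observed during statistical testing.
        # Interfailure times are invented; the published certification model includes
        # estimator details not reproduced here.
        import math

        def fit_mttf(interfail_times):
            """Fit log(t_k) = log(A) + k*log(B) by ordinary least squares."""
            n = len(interfail_times)
            xs = list(range(1, n + 1))
            ys = [math.log(t) for t in interfail_times]
            xbar = sum(xs) / n
            ybar = sum(ys) / n
            slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
                    sum((x - xbar) ** 2 for x in xs)
            intercept = ybar - slope * xbar
            return math.exp(intercept), math.exp(slope)   # A, B

        if __name__ == "__main__":
            times = [2.0, 3.5, 5.0, 9.0, 14.0, 26.0]   # hours between successive failures
            A, B = fit_mttf(times)
            next_k = len(times) + 1
            print(f"A = {A:.2f} h, B = {B:.2f}; "
                  f"predicted next interfailure time: {A * B ** next_k:.1f} h")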

  4. ACUTE TO CHRONIC ESTIMATION SOFTWARE FOR WINDOWS

    EPA Science Inventory

    Chronic No-Observed Effect Concentrations (NOEC) are commonly determined by either using acute-to-chronic ratios or by performing an ANOVA on chronic test data; both require lengthy and expensive chronic test results. Acute-to-Chronic Estimation (ACE) software was developed to p...

  5. Overview of T.E.S.T. (Toxicity Estimation Software Tool)

    EPA Science Inventory

    This talk provides an overview of T.E.S.T. (Toxicity Estimation Software Tool). T.E.S.T. predicts toxicity values and physical properties using a variety of different QSAR (quantitative structure activity relationship) approaches including hierarchical clustering, group contribut...

  6. As-built design specification for proportion estimate software subsystem

    NASA Technical Reports Server (NTRS)

    Obrien, S. (Principal Investigator)

    1980-01-01

    The Proportion Estimate Processor evaluates four estimation techniques in order to get an improved estimate of the proportion of a scene that is planted in a selected crop. The four techniques to be evaluated were provided by the techniques development section and are: (1) random sampling; (2) proportional allocation, relative count estimate; (3) proportional allocation, Bayesian estimate; and (4) sequential Bayesian allocation. The user is given two options for computation of the estimated mean square error. These are referred to as the cluster calculation option and the segment calculation option. The software for the Proportion Estimate Processor is operational on the IBM 3031 computer.
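
    Two of the four techniques can be illustrated generically, as in the sketch below: a simple random-sampling estimate of the crop proportion and a Bayesian estimate taken as the posterior mean under a Beta prior. The counts and the prior are illustrative assumptions; the processor's exact allocation rules are not reproduced.

        # Sketch: two generic estimators of the proportion of sampled units planted
        # in a crop. The prior and the data are illustrative assumptions; the
        # Proportion Estimate Processor's exact allocation rules are not reproduced.

        def random_sampling_estimate(hits, n):
            """Simple random-sampling estimate of the proportion, with its variance."""
            p = hits / n
            return p, p * (1 - p) / n

        def bayesian_estimate(hits, n, alpha=1.0, beta=1.0):
            """Posterior mean and variance under a Beta(alpha, beta) prior."""
            a, b = alpha + hits, beta + (n - hits)
            mean = a / (a + b)
            var = a * b / ((a + b) ** 2 * (a + b + 1))
            return mean, var

        if __name__ == "__main__":
            hits, n = 37, 120   # invented: 37 of 120 sampled units classified as the crop
            print("random sampling:", random_sampling_estimate(hits, n))
            print("Bayesian       :", bayesian_estimate(hits, n))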

  7. Enhanced chemical weapon warning via sensor fusion

    NASA Astrophysics Data System (ADS)

    Flaherty, Michael; Pritchett, Daniel; Cothren, Brian; Schwaiger, James

    2011-05-01

    Torch Technologies Inc. is actively involved in chemical sensor networking and data fusion via multi-year efforts with Dugway Proving Ground (DPG) and the Defense Threat Reduction Agency (DTRA). The objective of these efforts is to develop innovative concepts and advanced algorithms that enhance our national Chemical Warfare (CW) test and warning capabilities via the fusion of traditional and non-traditional CW sensor data. Under Phase I, II, and III Small Business Innovative Research (SBIR) contracts with DPG, Torch developed the Advanced Chemical Release Evaluation System (ACRES) software to support non real-time CW sensor data fusion. Under Phase I and II SBIRs with DTRA in conjunction with the Edgewood Chemical Biological Center (ECBC), Torch is using the DPG ACRES CW sensor data fuser as a framework from which to develop the Cloud state Estimation in a Networked Sensor Environment (CENSE) data fusion system. Torch is currently developing CENSE to implement and test innovative real-time sensor network based data fusion concepts using CW and non-CW ancillary sensor data to improve CW warning and detection in tactical scenarios.

  8. Collected Software Engineering Papers, Volume 10

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This document is a collection of selected technical papers produced by participants in the Software Engineering Laboratory (SEL) from Oct. 1991 - Nov. 1992. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. Although these papers cover several topics related to software engineering, they do not encompass the entire scope of SEL activities and interests. Additional information about the SEL and its research efforts may be obtained from the sources listed in the bibliography at the end of this document. For the convenience of this presentation, the 11 papers contained here are grouped into 5 major sections: (1) the Software Engineering Laboratory; (2) software tools studies; (3) software models studies; (4) software measurement studies; and (5) Ada technology studies.

  9. Measuring Sea-Ice Motion in the Arctic with Real Time Photogrammetry

    NASA Astrophysics Data System (ADS)

    Brozena, J. M.; Hagen, R. A.; Peters, M. F.; Liang, R.; Ball, D.

    2014-12-01

    The U.S. Naval Research Laboratory, in coordination with other groups, has been collecting sea-ice data in the Arctic off the north coast of Alaska with an airborne system employing a radar altimeter, LiDAR and a photogrammetric camera in an effort to obtain wide swaths of measurements coincident with Cryosat-2 footprints. Because the satellite tracks traverse areas of moving pack ice, precise real-time estimates of the ice motion are needed to fly a survey grid that will yield complete data coverage. This requirement led us to develop a method to find the ice motion from the aircraft during the survey. With the advent of real-time orthographic photogrammetric systems, we developed a system that measures the sea ice motion in-flight, and also permits post-process modeling of sea ice velocities to correct the positioning of radar and LiDAR data. For the 2013 and 2014 field seasons, we used this Real Time Ice Motion Estimation (RTIME) system to determine ice motion using Applanix's Inflight Ortho software with an Applanix DSS439 system. Operationally, a series of photos were taken in the survey area. The aircraft then turned around and took more photos along the same line several minutes later. Orthophotos were generated within minutes of collection and evaluated by custom software to find photo footprints and potential overlap. Overlapping photos were passed to the correlation software, which selects a series of "chips" in the first photo and looks for the best matches in the second photo. The correlation results are then passed to a density-based clustering algorithm to determine the offset of the photo pair. To investigate any systematic errors in the photogrammetry, we flew several flight lines over a fixed point on various headings, over an area of non-moving ice in 2013. The orthophotos were run through the correlation software to find any residual offsets, and run through additional software to measure chip positions and offsets relative to the aircraft heading. X- and Y-offsets in situations where one of the chips was near the center of its photo were plotted to find the along- and across-track errors vs. distance from the photo center. Corrections were determined and applied to the survey data, reducing the mean error by about 1 meter. The corrections were applied to all of the subsequent survey data.
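
    A minimal sketch of the chip-matching step is given below: a small chip from the first photo is located in a search window from the second photo by normalized cross-correlation, and the winning offset locates the chip and hence the apparent ice displacement in pixels. The arrays are synthetic, and the RTIME correlator, clustering step, and georeferencing are not reproduced.

        # Sketch: locate an image "chip" from photo 1 inside a search window of photo 2
        # by normalized cross-correlation. Arrays are synthetic; the RTIME correlator,
        # clustering step, and georeferencing are not reproduced here.
        import numpy as np

        def match_chip(chip, window):
            """Return ((row, col), score) for the best match of chip inside window."""
            ch, cw = chip.shape
            c = (chip - chip.mean()) / (chip.std() + 1e-12)
            best, best_rc = -np.inf, (0, 0)
            for r in range(window.shape[0] - ch + 1):
                for col in range(window.shape[1] - cw + 1):
                    patch = window[r:r + ch, col:col + cw]
                    p = (patch - patch.mean()) / (patch.std() + 1e-12)
                    score = float((c * p).mean())
                    if score > best:
                        best, best_rc = score, (r, col)
            return best_rc, best

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            window = rng.random((60, 60))
            chip = window[22:38, 30:46].copy()   # plant the chip at a known offset
            (row, col), score = match_chip(chip, window)
            print(f"best match at row={row}, col={col}, NCC={score:.3f}")   # expect 22, 30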

  10. A Database for Propagation Models and Conversion to C++ Programming Language

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Angkasa, Krisjani; Rucker, James

    1996-01-01

    The telecommunications system design engineer generally needs the quantification of effects of the propagation medium (definition of the propagation channel) to design an optimal communications system. To obtain the definition of the channel, the systems engineer generally has a few choices. A search of the relevant publications such as the IEEE Transactions, CCIR's, NASA propagation handbook, etc., may be conducted to find the desired channel values. This method may need excessive amounts of time and effort on the systems engineer's part and there is a possibility that the search may not even yield the needed results. To help the researcher and the systems engineers, it was recommended by the conference participants of NASA Propagation Experimenters (NAPEX) XV (London, Ontario, Canada, June 28 and 29, 1991) that software should be produced that would contain propagation models and the necessary prediction methods for most propagation phenomena. Moreover, the software should be flexible enough for the user to make slight changes to the models without expending substantial effort in programming. In the past few years, software was produced to fit these requirements as well as could be done. The software was distributed to all NAPEX participants for evaluation and use; participant reactions and suggestions were gathered and used to improve the subsequent releases of the software. The existing database program is in the Microsoft Excel application software and works well within the guidelines of that environment; however, recently there have been some questions about the robustness and survivability of the Excel software in the ever-changing (hopefully improving) world of software packages.

  11. Cultural and Technological Issues and Solutions for Geodynamics Software Citation

    NASA Astrophysics Data System (ADS)

    Heien, E. M.; Hwang, L.; Fish, A. E.; Smith, M.; Dumit, J.; Kellogg, L. H.

    2014-12-01

    Computational software and custom-written codes play a key role in scientific research and teaching, providing tools to perform data analysis and forward modeling through numerical computation. However, development of these codes is often hampered by the fact that there is no well-defined way for the authors to receive credit or professional recognition for their work through the standard methods of scientific publication and subsequent citation of the work. This in turn may discourage researchers from publishing their codes or making them easier for other scientists to use. We investigate the issues involved in citing software in a scientific context, and introduce features that should be components of a citation infrastructure, particularly oriented towards the codes and scientific culture in the area of geodynamics research. The codes used in geodynamics are primarily specialized numerical modeling codes for continuum mechanics problems; they may be developed by individual researchers, teams of researchers, geophysicists in collaboration with computational scientists and applied mathematicians, or by coordinated community efforts such as the Computational Infrastructure for Geodynamics. Some but not all geodynamics codes are open-source. These characteristics are common to many areas of geophysical software development and use. We provide background on the problem of software citation and discuss some of the barriers preventing adoption of such citations, including social/cultural barriers, insufficient technological support infrastructure, and an overall lack of agreement about what a software citation should consist of. We suggest solutions in an initial effort to create a system to support citation of software and promotion of scientific software development.

  12. A Comparison and Evaluation of Real-Time Software Systems Modeling Languages

    NASA Technical Reports Server (NTRS)

    Evensen, Kenneth D.; Weiss, Kathryn Anne

    2010-01-01

    A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.

  13. Toward Intelligent Software Defect Detection

    NASA Technical Reports Server (NTRS)

    Benson, Markland J.

    2011-01-01

    Source code level software defect detection has gone from state of the art to a software engineering best practice. Automated code analysis tools streamline many of the aspects of formal code inspections but have the drawback of being difficult to construct and either prone to false positives or severely limited in the set of defects that can be detected. Machine learning technology provides the promise of learning software defects by example, easing construction of detectors and broadening the range of defects that can be found. Pinpointing software defects with the same level of granularity as prominent source code analysis tools distinguishes this research from past efforts, which focused on analyzing software engineering metrics data with granularity limited to that of a particular function rather than a line of code.

  14. The role of the ADS in software discovery and citation

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto

    2018-01-01

    As the primary index of scholarly content in astronomy and physics, the NASA Astrophysics Data System (ADS) is collaborating with the AAS journals and the Zenodo repository in an effort to promote the preservation of scientific software used in astronomy research and its citation in scholarly publications. In this talk I will discuss how ADS is updating its service infrastructure to allow for the publication, indexing, and citation of software records in scientific articles.

  15. ASSIP Study of Real-Time Safety-Critical Embedded Software-Intensive System Engineering Practices

    DTIC Science & Technology

    2008-02-01

    and assessment 2. product engineering processes 3. tooling processes... Process Standards: ISO/IEC 12207 Software...and technical effort to align with 12207; ISO/IEC 15026 System & Software Integrity Levels; Generic Safety; SAE ARP 4754 Certification Considerations...Process frameworks in revision – ISO 9001, ISO 9004 – ISO 15288/ISO 12207 harmonization – RTCA DO-178B, MOD Standard UK 00-56/3, … • Methods & Tools

  16. Guide to data collection

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Guidelines and recommendations are presented for the collection of software development data. Motivation and planning for, and implementation and management of, a data collection effort are discussed. Topics covered include types, sources, and availability of data; methods and costs of data collection; types of analyses supported; and warnings and suggestions based on software engineering laboratory (SEL) experiences. This document is intended as a practical guide for software managers and engineers, abstracted and generalized from 5 years of SEL data collection.

  17. Cost Estimating Cases: Educational Tools for Cost Analysts

    DTIC Science & Technology

    1993-09-01

    only appropriate documentation should be provided. In other words, students should not submit all of the documentation possible using ACEIT, only that...case was their lack of understanding of the ACEIT software used to conduct the estimate. Specifically, many students misinterpreted the cost...estimating relationships (CERs) embedded in the software. Additionally, few of the students were able to properly organize the ACEIT documentation output

  18. Frequency Estimator Performance for a Software-Based Beacon Receiver

    NASA Technical Reports Server (NTRS)

    Zemba, Michael J.; Morse, Jacquelynne Rose; Nessel, James A.; Miranda, Felix

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a QV-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.
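
    One widely used estimator of this kind is parabolic (quadratic) interpolation of the log-magnitude FFT peak, which refines the frequency estimate well below the FFT bin spacing. The sketch below contrasts it with a plain peak search; the signal parameters are invented, and this is a generic example rather than one of the paper's six estimators.

        # Sketch: refine a beacon-tone frequency estimate beyond the FFT bin spacing by
        # parabolic interpolation of the log-magnitude spectrum around the peak bin.
        # Signal parameters are invented; this is one generic estimator, not the paper's.
        import numpy as np

        def estimate_frequency(signal, fs):
            spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
            k = int(np.argmax(spec))
            if 0 < k < len(spec) - 1:
                a, b, c = np.log(spec[k - 1:k + 2])
                delta = 0.5 * (a - c) / (a - 2 * b + c)   # peak offset in bins, |delta| <= 0.5
            else:
                delta = 0.0
            return (k + delta) * fs / len(signal)

        if __name__ == "__main__":
            fs, n, f_true = 10_000.0, 4096, 1234.56
            t = np.arange(n) / fs
            rng = np.random.default_rng(1)
            x = np.cos(2 * np.pi * f_true * t) + 0.05 * rng.standard_normal(n)
            k_peak = np.argmax(np.abs(np.fft.rfft(x)))
            print(f"plain peak search: {k_peak * fs / n:.2f} Hz")
            print(f"interpolated     : {estimate_frequency(x, fs):.2f} Hz (true {f_true} Hz)")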

  19. Experience Transitioning Models and Data at the NOAA Space Weather Prediction Center

    NASA Astrophysics Data System (ADS)

    Berger, Thomas

    2016-07-01

    The NOAA Space Weather Prediction Center has a long history of transitioning research data and models into operations and with the validation activities required. The first stage in this process involves demonstrating that the capability has sufficient value to customers to justify the cost needed to transition it and to run it continuously and reliably in operations. Once the overall value is demonstrated, a substantial effort is then required to develop the operational software from the research codes. The next stage is to implement and test the software and product generation on the operational computers. Finally, effort must be devoted to establishing long-term measures of performance, maintaining the software, and working with forecasters, customers, and researchers to improve over time the operational capabilities. This multi-stage process of identifying, transitioning, and improving operational space weather capabilities will be discussed using recent examples. Plans for future activities will also be described.

  20. Software Carpentry: lessons learned

    PubMed Central

    Wilson, Greg

    2016-01-01

    Since its start in 1998, Software Carpentry has evolved from a week-long training course at the US national laboratories into a worldwide volunteer effort to improve researchers' computing skills. This paper explains what we have learned along the way, the challenges we now face, and our plans for the future. PMID:24715981

  1. CyberCIEGE Scenario Illustrating Software Integrity Issues and Management of Air-Gapped Networks in a Military Environment

    DTIC Science & Technology

    2005-12-01

    courses in which students learn programming techniques using gaming software models. As part of the effort, teaching modules and entire courses will...assets. The United States Navy recently implemented new policies to restrict service personnel's use of commercial, web-based email applications in an

  2. IDEA: Identifying Design Principles in Educational Applets

    ERIC Educational Resources Information Center

    Underwood, Jody S.; Hoadley, Christopher; Lee, Hollylynne Stohl; Hollebrands, Karen; DiGiano, Chris; Renninger, K. Ann

    2005-01-01

    The Internet is increasingly being used as a medium for educational software in the form of miniature applications (e.g., applets) to explore concepts in a domain. One such effort in mathematics education, the Educational Software Components of Tomorrow (ESCOT) project, created 42 miniature applications each consisting of a context, a set of…

  3. Connecting the Library's Patron Database to Campus Administrative Software: Simplifying the Library's Accounts Receivable Process

    ERIC Educational Resources Information Center

    Oliver, Astrid; Dahlquist, Janet; Tankersley, Jan; Emrich, Beth

    2010-01-01

    This article discusses the processes that occurred when the Library, Controller's Office, and Information Technology Department agreed to create an interface between the Library's Innovative Interfaces patron database and campus administrative software, Banner, using file transfer protocol, in an effort to streamline the Library's accounts…

  4. Beyond the Evident Content Goals Part I. Tapping the Depth and Flow of the Educational Undercurrent.

    ERIC Educational Resources Information Center

    Dugdale, Sharon; Kibbey, David

    1990-01-01

    In the first of a series of three articles, successful instructional materials from a 15-year software development effort are analyzed and characterized, with special attention given to educational experiences intended to shape students' perceptions of the fundamental nature, interconnectedness, and usefulness of mathematics. The software programs…

  5. Enhancement of Spatial Thinking with Virtual Spaces 1.0

    ERIC Educational Resources Information Center

    Hauptman, Hanoch

    2010-01-01

    Developing a software environment to enhance 3D geometric proficiency demands the consideration of theoretical views of the learning process. Simultaneously, this effort requires taking into account the range of tools that technology offers, as well as their limitations. In this paper, we report on the design of Virtual Spaces 1.0 software, a…

  6. The Use of the Software MATLAB To Improve Chemical Engineering Education.

    ERIC Educational Resources Information Center

    Damatto, T.; Maegava, L. M.; Filho, R. Maciel

    In all the Brazilian universities involved with the project “Prodenge-Reenge”, the main objective is to improve teaching and learning procedures for the engineering disciplines. The Chemical Engineering College of Campinas State University focused its effort on the use of engineering software. The work developed by this project has…

  7. Man-computer Interactive Data Access System (McIDAS). [design, development, fabrication, and testing]

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A technical description is given of the effort to design, develop, fabricate, and test the two dimensional data processing system, McIDAS. The system has three basic sections: an access and data archive section, a control section, and a display section. Areas reported include hardware, system software, and applications software.

  8. Combining analysis with optimization at Langley Research Center. An evolutionary process

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.

    1982-01-01

    The evolutionary process of combining analysis and optimization codes was traced with a view toward providing insight into the long-term goal of developing the methodology for an integrated, multidisciplinary software system for the concurrent analysis and optimization of aerospace structures. It was traced along the lines of strength sizing, concurrent strength and flutter sizing, and general optimization to define a near-term goal for combining analysis and optimization codes. Development of a modular software system combining general-purpose, state-of-the-art, production-level analysis computer programs for structures, aerodynamics, and aeroelasticity with a state-of-the-art optimization program is required. Incorporation of a modular and flexible structural optimization software system into a state-of-the-art finite element analysis computer program will facilitate this effort. This effort resulted in a software system that is controlled with a special-purpose language, communicates with a data management system, and is easily modified to add new programs and capabilities. A 337 degree-of-freedom finite element model is used in verifying the accuracy of this system.

  9. Case study of open-source enterprise resource planning implementation in a small business

    NASA Astrophysics Data System (ADS)

    Olson, David L.; Staley, Jesse

    2012-02-01

    Enterprise resource planning (ERP) systems have been recognised as offering great benefit to some organisations, although they are expensive and problematic to implement. The cost and risk make well-developed proprietorial systems unaffordable to small businesses. Open-source software (OSS) has become a viable means of producing ERP system products. The question this paper addresses is the feasibility of OSS ERP systems for small businesses. A case is reported involving two efforts to implement freely distributed ERP software products in a small US make-to-order engineering firm. The case emphasises the potential of freely distributed ERP systems, as well as some of the hurdles involved in their implementation. The paper briefly reviews highlights of OSS ERP systems, with the primary focus on reporting the case experiences for efforts to implement ERPLite software and xTuple software. While both systems worked from a technical perspective, both failed due to economic factors. While these economic conditions led to imperfect results, the case demonstrates the feasibility of OSS ERP for small businesses. Both experiences are evaluated in terms of risk dimension.

  10. MAVEN Data Analysis and Visualization Toolkits

    NASA Astrophysics Data System (ADS)

    Harter, B., Jr.; DeWolfe, A. W.; Brain, D.; Chaffin, M.

    2017-12-01

    The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. The MAVEN Science Data Center has developed software toolkits for analyzing and visualizing the science data. Our Data Intercomparison and Visualization Development Effort (DIVIDE) toolkit is written in IDL, and utilizes the widely used "tplot" IDL libraries. Recently, we have converted DIVIDE into Python in an effort to increase the accessibility of the MAVEN data. This conversion also necessitated the development of a Python version of the tplot libraries, which we have dubbed "PyTplot". PyTplot is generalized to work with missions beyond MAVEN, and our software is available on Github.

  11. Predictors and Impact of Self-Reported Suboptimal Effort on Estimates of Prevalence of HIV-Associated Neurocognitive Disorders.

    PubMed

    Levine, Andrew J; Martin, Eileen; Sacktor, Ned; Munro, Cynthia; Becker, James

    2017-06-01

    Prevalence estimates of HIV-associated neurocognitive disorders (HAND) may be inflated. Estimates are determined via cohort studies in which participants may apply suboptimal effort on neurocognitive testing, thereby inflating estimates. Additionally, fluctuating HAND severity over time may be related to inconsistent effort. To address these hypotheses, we characterized effort in the Multicenter AIDS Cohort Study. After neurocognitive testing, 935 participants (525 HIV- and 410 HIV+) completed the visual analog effort scale (VAES), rating their effort from 0% to 100%. Those with <100% then indicated the reason(s) for suboptimal effort. K-means cluster analysis established 3 groups: high (mean = 97%), moderate (79%), and low effort (51%). Rates of HAND and other characteristics were compared between the groups. Linear regression examined the predictors of VAES score. Data from 57 participants who completed the VAES at 2 visits were analyzed to characterize the longitudinal relationship between effort and HAND severity. Fifty-two percent of participants reported suboptimal effort (<100%), with no difference between serostatus groups. Common reasons included "tired" (43%) and "distracted" (36%). The lowest effort group had greater asymptomatic neurocognitive impairment and minor neurocognitive disorder diagnosis (25% and 33%) as compared with the moderate (23% and 15%) and the high (12% and 9%) effort groups. Predictors of suboptimal effort were self-reported memory impairment, African American race, and cocaine use. Change in effort between baseline and follow-up correlated with change in HAND severity. Suboptimal effort seems to inflate estimated HAND prevalence and explain fluctuation of severity over time. A simple modification of study protocols to optimize effort is indicated by the results.
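
    A minimal sketch of the grouping step described above follows: a one-dimensional k-means clustering of self-reported effort percentages into three groups. The scores are invented, and the study's actual clustering software and settings are not stated in the abstract.

        # Sketch: cluster self-reported effort scores (0-100%) into three groups with
        # a very small 1-D k-means. Scores are invented; the study's actual clustering
        # software and settings are not given in the abstract.
        import random

        def kmeans_1d(values, k=3, iters=50, seed=0):
            """Tiny 1-D k-means; returns the k cluster centers, sorted."""
            random.seed(seed)
            centers = random.sample(values, k)
            for _ in range(iters):
                groups = [[] for _ in range(k)]
                for v in values:
                    nearest = min(range(k), key=lambda i: abs(v - centers[i]))
                    groups[nearest].append(v)
                centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
            return sorted(centers)

        if __name__ == "__main__":
            scores = [100, 98, 95, 100, 97, 82, 78, 75, 80, 55, 48, 52, 60, 90, 70]
            print("cluster centers (%):", [round(c) for c in kmeans_1d(scores)])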

  12. Integration, Validation, and Application of a PV Snow Coverage Model in SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine M.; Ryberg, David Severin

    2017-08-01

    Due to the increasing deployment of PV systems in snowy climates, there is significant interest in a method capable of estimating PV losses resulting from snow coverage that has been verified for a variety of system designs and locations. Many independent snow coverage models have been developed over the last 15 years; however, there has been very little effort verifying these models beyond the system designs and locations on which they were based. Moreover, major PV modeling software products have not yet incorporated any of these models into their workflows. In response to this deficiency, we have integrated the methodology of the snow model developed in the paper by Marion et al. (2013) into the National Renewable Energy Laboratory's (NREL) System Advisor Model (SAM). In this work, we describe how the snow model is implemented in SAM and we discuss our demonstration of the model's effectiveness at reducing error in annual estimations for three PV arrays. Next, we use this new functionality in conjunction with a long term historical data set to estimate average snow losses across the United States for two typical PV system designs. The open availability of the snow loss estimation capability in SAM to the PV modeling community, coupled with our results of the nationwide study, will better equip the industry to accurately estimate PV energy production in areas affected by snowfall.
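
    The general shape of an hourly snow-coverage state machine of this kind can be sketched as below: coverage resets to full when snowfall is detected and slides off in tilt-dependent steps when melting conditions are met. The slide rate and the melt condition are placeholder assumptions, not the Marion et al. (2013) parameterization implemented in SAM.

        # Sketch of an hourly snow-coverage state machine of the general kind used for
        # PV snow-loss estimation: coverage resets to 1.0 on snowfall and slides off in
        # tilt-dependent steps when melting conditions are met. The slide rate and the
        # melt condition are placeholder assumptions, NOT the Marion et al. values.
        import math

        def snow_coverage(hours, tilt_deg, slide_per_hour=0.1):
            """hours: iterable of (snowfall_cm, air_temp_c, poa_irradiance_w_m2) tuples."""
            slide = slide_per_hour * math.sin(math.radians(tilt_deg))
            coverage, out = 0.0, []
            for snowfall, temp, poa in hours:
                if snowfall > 0:
                    coverage = 1.0                  # fresh snow fully covers the array
                elif temp > 0 and poa > 100:        # placeholder melt/slide condition
                    coverage = max(0.0, coverage - slide)
                out.append(coverage)
            return out

        if __name__ == "__main__":
            weather = [(2, -3, 50), (0, -1, 120), (0, 1, 300), (0, 2, 400), (0, 3, 450)]
            print(snow_coverage(weather, tilt_deg=35))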

  13. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).
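
    For readers unfamiliar with the underlying machinery, a generic sketch of the forward-algorithm likelihood for a two-state hidden Markov model with state-dependent detection is given below. All probabilities are invented placeholders, and the paper's robust-design multistate models add structure (secondary sampling occasions, survival, and state transitions over primary periods) that is not shown here.

        # Sketch: forward-algorithm likelihood of an observation sequence under a
        # two-state hidden Markov model with state-dependent detection probabilities.
        # All probabilities are invented; the paper's robust-design multistate models
        # add structure not shown here.
        import numpy as np

        def forward_likelihood(obs, pi, A, B):
            """obs: observed symbol indices; pi: initial state probs;
            A[i, j]: transition i->j; B[i, o]: P(observe o | state i)."""
            alpha = pi * B[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
            return float(alpha.sum())

        if __name__ == "__main__":
            pi = np.array([0.6, 0.4])                   # e.g., breeder / non-breeder
            A = np.array([[0.8, 0.2], [0.3, 0.7]])      # state transition probabilities
            B = np.array([[0.1, 0.7, 0.2],              # obs: not seen / seen, state certain /
                          [0.4, 0.1, 0.5]])             #      seen, state uncertain
            print(forward_likelihood([1, 0, 2, 1], pi, A, B))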

  14. Attributes Effecting Software Testing Estimation; Is Organizational Trust an Issue?

    ERIC Educational Resources Information Center

    Hammoud, Wissam

    2013-01-01

    This quantitative correlational research explored the potential association between the levels of organizational trust and the software testing estimation. This was conducted by exploring the relationships between organizational trust, tester's expertise, organizational technology used, and the number of hours, number of testers, and time-coding…

  15. Lessons Learned from Autonomous Sciencecraft Experiment

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Sherwood, Rob; Tran, Daniel; Cichy, Benjamin; Rabideau, Gregg; Castano, Rebecca; Davies, Ashley; Mandl, Dan; Frye, Stuart; Trout, Bruce; hide

    2005-01-01

    An Autonomous Science Agent has been flying onboard the Earth Observing One spacecraft since 2003. This software enables the spacecraft to autonomously detect and respond to science events occurring on the Earth, such as volcanoes, flooding, and snow melt. The package includes AI-based software systems that perform science data analysis, deliberative planning, and run-time robust execution. This software is in routine use to fly the EO-1 mission. In this paper we briefly review the agent architecture and discuss lessons learned from this multi-year flight effort pertinent to deployment of software agents to critical applications.

  16. Development of a Unix/VME data acquisition system

    NASA Astrophysics Data System (ADS)

    Miller, M. C.; Ahern, S.; Clark, S. M.

    1992-01-01

    The current status of a Unix-based VME data acquisition development project is described. It is planned to use existing Fortran data collection software to drive the existing CAMAC electronics via a VME CAMAC branch driver card and associated Daresbury Unix driving software. The first usable Unix driver has been written and produces single-action CAMAC cycles from test software. The data acquisition code has been implemented in test mode under Unix with few problems and effort is now being directed toward finalizing calls to the CAMAC-driving software and ultimate evaluation of the complete system.

  17. Model Driven Engineering

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.

  18. Effective organizational solutions for implementation of DBMS software packages

    NASA Technical Reports Server (NTRS)

    Jones, D.

    1984-01-01

    The Space Telescope management information system development effort serves as a guideline for discussing effective organizational solutions used in implementing DBMS software. Focus is on the importance of strategic planning. The value of constructing an information system architecture to conform to the organization's managerial needs, the need for a senior decision maker, dealing with shifting user requirements, and the establishment of a reliable working relationship with the DBMS vendor are examined. Requirements for a schedule to demonstrate progress against a defined timeline and the importance of continued monitoring for production software control, production data control, and software enhancements are also discussed.

  19. Study of fault-tolerant software technology

    NASA Technical Reports Server (NTRS)

    Slivinski, T.; Broglio, C.; Wild, C.; Goldberg, J.; Levitt, K.; Hitt, E.; Webb, J.

    1984-01-01

    Presented is an overview of the current state of the art of fault-tolerant software and an analysis of quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the computer architecture and design implications on hardware, operating systems and programming languages (including Ada) of using fault-tolerant software in real-time aerospace applications. It concludes that fault-tolerant software has progressed beyond the pure research state. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to effectively and efficiently implement software fault-tolerance.

  20. Software Management for the NOνA Experiment

    NASA Astrophysics Data System (ADS)

    Davies, G. S.; Davies, J. P.; C Group; Rebel, B.; Sachdev, K.; Zirnstein, J.

    2015-12-01

    The NOvA software (NOνASoft) is written in C++ and built on the Fermilab Computing Division's art framework, which uses ROOT analysis software. NOνASoft makes use of more than 50 external software packages, is developed by more than 50 developers, and is used by more than 100 physicists from over 30 universities and laboratories on 3 continents. The software builds are handled by Fermilab's custom version of Software Release Tools (SRT), a UNIX based software management system for large, collaborative projects that is used by several experiments at Fermilab. The system provides software version control with SVN configured in a client-server mode and is based on the code originally developed by the BaBar collaboration. In this paper, we present efforts towards distributing the NOvA software via the CernVM File System distributed file system. We will also describe our recent work to use a CMake build system and Jenkins, the open source continuous integration system, for NOνASoft.

  1. Analysing the Zenith Tropospheric Delay Estimates in On-line Precise Point Positioning (PPP) Services and PPP Software Packages.

    PubMed

    Mendez Astudillo, Jorge; Lau, Lawrence; Tang, Yu-Ting; Moore, Terry

    2018-02-14

    As Global Navigation Satellite System (GNSS) signals travel through the troposphere, a tropospheric delay occurs due to a change in the refractive index of the medium. The Precise Point Positioning (PPP) technique can achieve centimeter/millimeter positioning accuracy with only one GNSS receiver. The Zenith Tropospheric Delay (ZTD) is estimated alongside the position unknowns in PPP. Estimated ZTD can be very useful for meteorological applications; an example is the estimation of water vapor content in the atmosphere from the estimated ZTD. PPP is implemented with different algorithms and models in online services and software packages. In this study, a performance assessment with analysis of ZTD estimates from three PPP online services and three software packages is presented. The main contribution of this paper is to show the accuracy of ZTD estimation achievable in PPP. The analysis also gives GNSS users and researchers insight into the dependence of PPP ZTD estimation on the processing algorithms used and its impact. Observation data of eight whole days from a total of nine International GNSS Service (IGS) tracking stations spread over the northern hemisphere, the equatorial region, and the southern hemisphere are used in this analysis. The PPP ZTD estimates are compared with the ZTD obtained from the IGS tropospheric product of the same days. The estimates of two of the three online PPP services show good agreement (<1 cm) with the IGS ZTD values at the northern and southern hemisphere stations. The results also show that the online PPP services perform better than the selected PPP software packages at all stations.
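
    A small sketch of the comparison metric used in assessments of this kind follows: the bias and RMS of PPP-derived ZTD against the IGS tropospheric product at common epochs. The values are invented for illustration.

        # Sketch: compare PPP-derived ZTD estimates against the IGS tropospheric product
        # at common epochs, reporting bias and RMS. The values below are invented.

        def compare_ztd(ppp_mm, igs_mm):
            diffs = [p - i for p, i in zip(ppp_mm, igs_mm)]
            n = len(diffs)
            bias = sum(diffs) / n
            rms = (sum(d * d for d in diffs) / n) ** 0.5
            return bias, rms

        if __name__ == "__main__":
            ppp = [2451.0, 2448.5, 2453.2, 2449.8]   # ZTD estimates in millimetres
            igs = [2449.0, 2447.0, 2451.5, 2450.2]
            bias, rms = compare_ztd(ppp, igs)
            print(f"bias = {bias:.1f} mm, RMS = {rms:.1f} mm")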

  2. Apparent annual survival estimates of tropical songbirds better reflect life history variation when based on intensive field methods

    USGS Publications Warehouse

    Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.

    2017-01-01

    Aim: Adult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits. Location: Sabah, Malaysian Borneo. Time period: 2008–2016. Major taxon: Passeriformes. Methods: We used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches. Results: Estimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories. Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species. Main conclusions: Inclusion of resighting showed that standard-effort netting alone can negatively bias apparent survival estimates and obscure life history relationships across latitudes and among tropical species.

  3. A study of software standards used in the avionics industry

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1994-01-01

    Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.

  4. OAST Space Theme Workshop. Volume 3: Working group summary. 4: Software (E-4). A. Summary. B. Technology needs (form 1). C. Priority assessment (form 2)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Only a few efforts are currently underway to develop an adequate technology base for the various themes. Particular attention must be given to software commonality and evolutionary capability, to increased system integrity and autonomy; and to improved communications among the program users, the program developers, and the programs themselves. There is a need for quantum improvement in software development methods and increasing the awareness of software by all concerned. Major thrusts identified include: (1) data and systems management; (2) software technology for autonomous systems; (3) technology and methods for improving the software development process; (4) advances related to systems of software elements including their architecture, their attributes as systems, and their interfaces with users and other systems; and (5) applications of software including both the basic algorithms used in a number of applications and the software specific to a particular theme or discipline area. The impact of each theme on software is assessed.

  5. INTERSPECIES CORRELATION ESTIMATION (ICE) FOR ACUTE TOXICITY TO AQUATIC ORGANISMS AND WILDLIFE. II. USER MANUAL AND SOFTWARE

    EPA Science Inventory

    Asfaw, Amha, Mark R. Ellersieck and Foster L. Mayer. 2003. Interspecies Correlation Estimations (ICE) for Acute Toxicity to Aquatic Organisms and Wildlife. II. User Manual and Software. EPA/600/R-03/106. U.S. Environmental Protection Agency, National Health and Environmental Effe...

  6. Analysis of Schedule Determination in Software Program Development and Software Development Estimation Models

    DTIC Science & Technology

    1988-09-01

    [Table of contents excerpt: sections on the SLIM, SoftCost-R, SPQR/20, PRICE-S, and System-3 estimation models, with appendices covering SoftCost-R input values, the SoftCost-R resources estimate, and SPQR/20.]

  7. Towards a publicly available, map-based regional software tool to estimate unregulated daily streamflow at ungauged rivers

    USGS Publications Warehouse

    Archfield, Stacey A.; Steeves, Peter A.; Guthrie, John D.; Ries, Kernell G.

    2013-01-01

    Streamflow information is critical for addressing any number of hydrologic problems. Often, streamflow information is needed at locations that are ungauged and, therefore, have no observations on which to base water management decisions. Furthermore, there has been increasing need for daily streamflow time series to manage rivers for both human and ecological functions. To facilitate negotiation between human and ecological demands for water, this paper presents the first publicly available, map-based, regional software tool to estimate historical, unregulated, daily streamflow time series (streamflow not affected by human alteration such as dams or water withdrawals) at any user-selected ungauged river location. The map interface allows users to locate and click on a river location, which then links to a spreadsheet-based program that computes estimates of daily streamflow for the river location selected. For a demonstration region in the northeast United States, daily streamflow was, in general, shown to be reliably estimated by the software tool. Estimating the highest and lowest streamflows that occurred in the demonstration region over the period from 1960 through 2004 also was accomplished but with more difficulty and limitations. The software tool provides a general framework that can be applied to other regions for which daily streamflow estimates are needed.

  8. A CMMI-based approach for medical software project life cycle study.

    PubMed

    Chen, Jui-Jen; Su, Wu-Chen; Wang, Pei-Wen; Yen, Hung-Chi

    2013-01-01

    In terms of medical techniques, Taiwan has gained international recognition in recent years. However, the medical information system industry in Taiwan is still at a developing stage compared with the software industries in other nations. In addition, systematic development processes are indispensable elements of software development. They can help developers increase their productivity and efficiency and also avoid unnecessary risks arising during the development process. Thus, this paper presents an application of Light-Weight Capability Maturity Model Integration (LW-CMMI) to the Chang Gung Medical Research Project (CMRP) in the nuclear medicine field. The application integrates the user requirements, system design, and testing activities of the software development process into a three-layer (Domain, Concept, and Instance) model, expresses this model in structural Systems Modeling Language (SysML) diagrams, and converts part of the manual effort required for project management maintenance into computational effort, for example (semi-)automatic traceability management. The application supports establishing the "requirement specification document", "project execution plan document", "system design document", and "system test document" artifacts, and delivers a prototype lightweight project management tool for the nuclear medicine software project. The results can serve as a reference for other medical institutions developing medical information systems and supporting project management in pursuit of patient safety.

  9. On Quality and Measures in Software Engineering

    ERIC Educational Resources Information Center

    Bucur, Ion I.

    2006-01-01

    Complexity measures are mainly used to estimate vital information about reliability and maintainability of software systems from regular analysis of the source code. Such measures also provide constant feedback during a software project to assist the control of the development procedure. There exist several models to classify a software product's…

  10. Fault Tolerant Software Technology for Distributed Computer Systems

    DTIC Science & Technology

    1989-03-01

    Only fragments of this final technical report are recoverable. The report describes "Fault Tolerant Software Technology for Distributed Computing Systems," a two-year effort performed at Georgia Institute of Technology as part of the Clouds Project.

  11. Physical Therapy Machine

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Loredan Biomedical, Inc.'s LIDO, a computerized physical therapy system, was purchased by NASA in 1985 for evaluation as a Space Station Freedom exercise program. In 1986, while involved in an ARC muscle conditioning project, Malcom Bond, Loredan's chairman, designed an advanced software package for NASA which became the basis for LIDOSOFT software used in the commercially available system. The system employs a "proprioceptive" software program which perceives internal body conditions, induces perturbations to muscular effort and evaluates the response. Biofeedback on a screen allows a patient to observe his own performance.

  12. A Survey On Management Of Software Engineering In Japan

    NASA Astrophysics Data System (ADS)

    Kadono, Yasuo; Tsubaki, Hiroe; Tsuruho, Seishiro

    2008-05-01

    The purpose of this study is to clarify the mechanism of how software engineering capabilities relate to the business performance of IT vendors in Japan. To do this, we developed a structural model using factors related to software engineering, business performance and competitive environment. By analyzing the data collected from 78 major IT vendors in Japan, we found that superior deliverables and business performance were correlated with the effort expended particularly on human resource development, quality assurance, research and development and process improvement.

  13. The 25 kW power module evolution study. Part 3: Conceptual design for power module evolution. Volume 6: WBS and dictionary

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Program elements of the power module (PM) system are identified, structured, and defined according to the planned work breakdown structure. Efforts required to design, develop, manufacture, test, checkout, launch and operate a protoflight assembled 25 kW, 50 kW and 100 kW PM include the preparation and delivery of related software, government furnished equipment, space support equipment, ground support equipment, launch site verification software, orbital verification software, and all related data items.

  14. ProbCD: enrichment analysis accounting for categorization uncertainty.

    PubMed

    Vêncio, Ricardo Z N; Shmulevich, Ilya

    2007-10-12

    As in many other areas of science, systems biology makes extensive use of statistical association and significance estimates in contingency tables, a type of categorical data analysis known in this field as enrichment (also over-representation or enhancement) analysis. In spite of efforts to create probabilistic annotations, especially in the Gene Ontology context, or to deal with uncertainty in high throughput-based datasets, current enrichment methods largely ignore this probabilistic information since they are mainly based on variants of the Fisher Exact Test. We developed an open-source R-based software to deal with probabilistic categorical data analysis, ProbCD, that does not require a static contingency table. The contingency table for the enrichment problem is built using the expectation of a Bernoulli Scheme stochastic process given the categorization probabilities. An on-line interface was created to allow usage by non-programmers and is available at: http://xerad.systemsbiology.net/ProbCD/. We present an analysis framework and software tools to address the issue of uncertainty in categorical data analysis. In particular, concerning the enrichment analysis, ProbCD can accommodate: (i) the stochastic nature of the high-throughput experimental techniques and (ii) probabilistic gene annotation.
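
    To make the contrast with classical enrichment testing concrete, the following Python sketch (not ProbCD itself, which is an R package) shows how an expected 2x2 contingency table can be built from per-gene annotation probabilities instead of hard 0/1 labels; the simulated data and all variable names are assumptions.

        # Minimal sketch (not ProbCD's code): building the *expected* 2x2 contingency
        # table for enrichment when gene-category membership is probabilistic.
        # 'p_category' is each gene's probability of carrying the annotation;
        # 'selected' marks genes in the list of interest. All values are simulated.
        import numpy as np
        from scipy.stats import fisher_exact

        rng = np.random.default_rng(0)
        n_genes = 1000
        p_category = rng.uniform(0.0, 0.3, n_genes)      # probabilistic annotation
        selected = rng.random(n_genes) < 0.05            # gene list of interest

        # Expected counts: sum membership probabilities instead of counting 0/1 labels.
        a = p_category[selected].sum()                   # selected & in category
        b = (1.0 - p_category[selected]).sum()           # selected & not in category
        c = p_category[~selected].sum()                  # not selected & in category
        d = (1.0 - p_category[~selected]).sum()          # not selected & not in category
        expected_table = np.array([[a, b], [c, d]])

        # A classical Fisher test requires rounding these expectations to integers,
        # which is exactly the information loss probabilistic methods avoid.
        odds_ratio, p_value = fisher_exact(np.rint(expected_table).astype(int))
        print(expected_table, odds_ratio, p_value)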

  15. A new paradigm on battery powered embedded system design based on User-Experience-Oriented method

    NASA Astrophysics Data System (ADS)

    Wang, Zhuoran; Wu, Yue

    2014-03-01

    The battery sustainable time has recently been an active research topic in the development of battery-powered embedded products such as tablets and smart phones, and it is determined by the battery capacity and the power consumption. Despite numerous efforts to improve battery capacity in the field of materials engineering, power consumption also plays an important role and is easier to improve when delivering a desirable user experience, especially considering the moderate advancement of batteries over the past decades. In this study, a new top-down modelling method, the User-Experience-Oriented Battery Powered Embedded System Design Paradigm, is proposed to estimate the target average power consumption, to guide the hardware and software design, and eventually to approach the theoretical lowest power consumption at which the application can still provide full functionality. Starting from the 10-hour sustainable time standard, the average working current is derived from the battery design capacity and set as a target. An implementation is then illustrated from the hardware perspective, which is summarized as Auto-Gating power management, and from the software perspective, which introduces a new algorithm, SleepVote, to guide the system task design and scheduling.
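
    As a concrete illustration of the paper's starting point, the sketch below works through the target-current arithmetic: the target average working current is the battery design capacity divided by the desired sustainable time of 10 hours. The capacity and subsystem budget figures are invented example values, not numbers from the study.

        # Illustrative sketch of deriving a target average current from a
        # 10-hour sustainable-time requirement. All numbers are example values.
        BATTERY_CAPACITY_MAH = 3000.0    # assumed design capacity of the battery
        TARGET_HOURS = 10.0              # sustainable-time standard used as the goal

        target_avg_current_ma = BATTERY_CAPACITY_MAH / TARGET_HOURS
        print(f"Target average current: {target_avg_current_ma:.0f} mA")

        # A design budget can then allocate this target across subsystems
        # (names and values are hypothetical).
        budget_ma = {"SoC": 150.0, "display": 90.0, "radio": 40.0, "sensors": 10.0}
        total = sum(budget_ma.values())
        print(f"Budgeted: {total:.0f} mA -> {'OK' if total <= target_avg_current_ma else 'over budget'}")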

  16. Examining Reuse in LaSRS++-Based Projects

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.

    2001-01-01

    NASA Langley Research Center (LaRC) developed the Langley Standard Real-Time Simulation in C++ (LaSRS++) to consolidate all software development for its simulation facilities under one common framework. A common framework promised a decrease in the total development effort for a new simulation by encouraging software reuse. To judge the success of LaSRS++ in this regard, reuse metrics were extracted from 11 aircraft models. Three methods that employ static analysis of the code were used to identify the reusable components. For the method that provides the best estimate, reuse levels fall between 66% and 95% indicating a high degree of reuse. Additional metrics provide insight into the extent of the foundation that LaSRS++ provides to new simulation projects. When creating variants of an aircraft, LaRC developers use object-oriented design to manage the aircraft as a reusable resource. Variants modify the aircraft for a research project or embody an alternate configuration of the aircraft. The variants inherit from the aircraft model. The variants use polymorphism to extend or redefine aircraft behaviors to meet the research requirements or to match the alternate configuration. Reuse level metrics were extracted from 10 variants. Reuse levels of aircraft by variants were 60% - 99%.
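
    A brief, hypothetical sketch of the kind of reuse-level metric described above follows; it simply expresses reused (framework) lines of code as a fraction of total lines for each project. The figures shown are invented, not the LaSRS++ data.

        # Hypothetical reuse-level calculation: reused (framework) SLOC divided by
        # total SLOC per simulation project. Values are illustrative only.
        projects = {
            "aircraft_A": {"reused_sloc": 180_000, "new_sloc": 20_000},
            "aircraft_B": {"reused_sloc": 150_000, "new_sloc": 60_000},
        }

        for name, sloc in projects.items():
            total = sloc["reused_sloc"] + sloc["new_sloc"]
            reuse_level = sloc["reused_sloc"] / total
            print(f"{name}: reuse level = {reuse_level:.1%}")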

  17. Meteor44 Video Meteor Photometry

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Suggs, Robert M.; Cooke, William J.

    2004-01-01

    Meteor44 is a software system developed at MSFC for the calibration and analysis of video meteor data. The dynamic range of the (8-bit) video data is extended by approximately 4 magnitudes for both meteors and stellar images using saturation compensation. Camera- and lens-specific saturation compensation coefficients are derived from artificial variable star laboratory measurements. Saturation compensation significantly increases the number of meteors with measured intensity and improves the estimation of meteoroid mass distribution. Astrometry is automated to determine each image's plate coefficients using appropriate star catalogs. The images are simultaneously intensity-calibrated from the contained stars to determine the photon sensitivity and the saturation level referenced above the atmosphere. The camera's spectral response is used to compensate for stellar color index and typical meteor spectra in order to report meteor light curves in traditional visual magnitude units. Recent efforts include improved camera calibration procedures, long focal length "streak" meteor photometry, and two-station track determination. Meteor44 has been used to analyze data from the 2001, 2002 and 2003 MSFC Leonid observational campaigns as well as several lesser showers. The software is interactive and can be demonstrated using data from recent Leonid campaigns.
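
    For readers unfamiliar with the photometric calibration step, a hedged sketch follows of the standard conversion from calibrated instrumental flux to a visual magnitude via a zero point fitted from catalog stars; the fitting details and data are illustrative and do not reproduce Meteor44's actual algorithm.

        # Illustrative photometric calibration: fit a zero point from catalog stars,
        # then convert a meteor's background-subtracted flux to a visual magnitude.
        # Data values are invented; this is not Meteor44's implementation.
        import numpy as np

        catalog_mag = np.array([3.2, 4.1, 4.8, 5.5, 6.0])             # known star magnitudes
        measured_flux = np.array([9800., 4300., 2250., 1180., 740.])  # calibrated counts

        # m = -2.5*log10(flux) + ZP  =>  ZP = m + 2.5*log10(flux)
        zero_point = np.mean(catalog_mag + 2.5 * np.log10(measured_flux))

        meteor_flux = 15500.0                                          # saturation-compensated counts
        meteor_mag = -2.5 * np.log10(meteor_flux) + zero_point
        print(f"Zero point = {zero_point:.2f}, meteor magnitude = {meteor_mag:.2f}")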

  18. Collected software engineering papers, volume 8

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A collection of selected technical papers produced by participants in the Software Engineering Laboratory (SEL) during the period November 1989 through October 1990 is presented. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. Although these papers cover several topics related to software engineering, they do not encompass the entire scope of SEL activities and interests. Additional information about the SEL and its research efforts may be obtained from the sources listed in the bibliography. The seven presented papers are grouped into four major categories: (1) experimental research and evaluation of software measurement; (2) studies on models for software reuse; (3) a software tool evaluation; and (4) Ada technology and studies in the areas of reuse and specification.

  19. Space Shuttle Software Development and Certification

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Henderson, Johnnie A

    2000-01-01

    Man-rated software, "software which is in control of systems and environments upon which human life is critically dependent," must be highly reliable. The Space Shuttle Primary Avionics Software System is an excellent example of such a software system. Lessons learned from more than 20 years of effort have identified basic elements that must be present to achieve this high degree of reliability. The elements include rigorous application of appropriate software development processes, use of trusted tools to support those processes, quantitative process management, and defect elimination and prevention. This presentation highlights methods used within the Space Shuttle project and raises questions that must be addressed to provide similar success in a cost-effective manner on future long-term projects where key application development tools are COTS rather than internally developed custom application development tools.

  20. Mining Software Usage with the Automatic Library Tracking Database (ALTD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadri, Bilel; Fahey, Mark R

    2013-01-01

    Tracking software usage is important for HPC centers, computer vendors, code developers and funding agencies to provide more efficient and targeted software support, and to forecast needs and guide HPC software effort towards the Exascale era. However, accurately tracking software usage on HPC systems has been a challenging task. In this paper, we present a tool called Automatic Library Tracking Database (ALTD) that has been developed and put in production on several Cray systems. The ALTD infrastructure prototype automatically and transparently stores information about libraries linked into an application at compilation time and also the executables launched in a batch job. We will illustrate the usage of libraries, compilers and third party software applications on a system managed by the National Institute for Computational Sciences.

  1. Application of software technology to automatic test data analysis

    NASA Technical Reports Server (NTRS)

    Stagner, J. R.

    1991-01-01

    The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.

  2. Success Rates by Software Development Methodology in Information Technology Project Management: A Quantitative Analysis

    ERIC Educational Resources Information Center

    Wright, Gerald P.

    2013-01-01

    Despite over half a century of Project Management research, project success rates are still too low. Organizations spend a tremendous amount of valuable resources on Information Technology projects and seek to maximize the utility gained from their efforts. The author investigated the impact of software development methodology choice on ten…

  3. HALOE test and evaluation software

    NASA Technical Reports Server (NTRS)

    Edmonds, W.; Natarajan, S.

    1987-01-01

    Computer programming, system development and analysis efforts during this contract were carried out in support of the Halogen Occultation Experiment (HALOE) at NASA/Langley. Support in the major areas of data acquisition and monitoring, data reduction and system development are described along with a brief explanation of the HALOE project. Documented listings of major software are located in the appendix.

  4. An Empirical Investigation of Pre-Project Partnering Activities on Project Performance in the Software Industry

    ERIC Educational Resources Information Center

    Proffitt, Curtis K.

    2012-01-01

    Project failure remains a challenge within the software development field, especially during the early stages of IT project development. Despite the herculean efforts by project managers and organizations to identify and offset problems, projects remain plagued with issues. If these challenges are not mitigated to a successful degree,…

  5. Vertical and Horizontal Integration of Laboratory Curricula and Course Projects across the Electronic Engineering Technology Program

    ERIC Educational Resources Information Center

    Zhan, Wei; Goulart, Ana; Morgan, Joseph A.; Porter, Jay R.

    2011-01-01

    This paper discusses the details of the curricular development effort with a focus on the vertical and horizontal integration of laboratory curricula and course projects within the Electronic Engineering Technology (EET) program at Texas A&M University. Both software and hardware aspects are addressed. A common set of software tools are…

  6. Svetovid--Interactive Development and Submission System with Prevention of Academic Collusion in Computer Programming

    ERIC Educational Resources Information Center

    Pribela, Ivan; Ivanovic, Mirjana; Budimac, Zoran

    2009-01-01

    This paper discusses Svetovid, cross-platform software that helps instructors to assess the amount of effort put into practical exercises and exams in courses related to computer programming. The software was developed as an attempt at solving problems associated with practical exercises and exams. This paper discusses the design and use of…

  7. Sustainable Software Decisions for Long-term Projects (Invited)

    NASA Astrophysics Data System (ADS)

    Shepherd, A.; Groman, R. C.; Chandler, C. L.; Gaylord, D.; Sun, M.

    2013-12-01

    Adopting new, emerging technologies can be difficult for established projects that are positioned to exist for years to come. In some cases the challenge lies in the pre-existing software architecture. In others, the challenge lies in the fluctuation of resources like people, time and funding. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) was created in late 2006 by combining the data management offices for the U.S. GLOBEC and U.S. JGOFS programs to publish data for researchers funded by the National Science Foundation (NSF). Since its inception, BCO-DMO has been supporting access and discovery of these data through web-accessible software systems, and the office has worked through many of the challenges of incorporating new technologies into its software systems. From migrating human readable, flat file metadata storage into a relational database, and now, into a content management system (Drupal) to incorporating controlled vocabularies, new technologies can radically affect the existing software architecture. However, through the use of science-driven use cases, effective resource management, and loosely coupled software components, BCO-DMO has been able to adapt its existing software architecture to adopt new technologies. One of the latest efforts at BCO-DMO revolves around applying metadata semantics for publishing linked data in support of data discovery. This effort primarily affects the metadata web interface software at http://bco-dmo.org and the geospatial interface software at http://mapservice.bco-dmo.org/. With guidance from science-driven use cases and consideration of our resources, implementation decisions are made using a strategy to loosely couple the existing software systems to the new technologies. The results of this process led to the use of REST web services and a combination of contributed and custom Drupal modules for publishing BCO-DMO's content using the Resource Description Framework (RDF) via an instance of the Virtuoso Open-Source triplestore.
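
    As a rough illustration of the linked-data publishing described above (and not BCO-DMO's actual Drupal/Virtuoso implementation), the sketch below builds a few RDF triples describing a hypothetical dataset and serializes them to Turtle; the URIs, vocabulary choices, and values are assumptions.

        # Hedged sketch of publishing dataset metadata as RDF triples.
        # URIs, vocabulary choices, and values are hypothetical, not BCO-DMO's schema.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DCTERMS, RDF

        g = Graph()
        ex = Namespace("http://example.org/dataset/")
        dataset = ex["chl_a_cruise_2012"]

        g.add((dataset, RDF.type, URIRef("http://purl.org/dc/dcmitype/Dataset")))
        g.add((dataset, DCTERMS.title, Literal("Chlorophyll-a measurements, 2012 cruise")))
        g.add((dataset, DCTERMS.creator, Literal("Example Investigator")))

        # Serialize to Turtle; a triplestore such as Virtuoso could then ingest this.
        print(g.serialize(format="turtle"))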

  8. PAF: A software tool to estimate free-geometry extended bodies of anomalous pressure from surface deformation data

    NASA Astrophysics Data System (ADS)

    Camacho, A. G.; Fernández, J.; Cannavò, F.

    2018-02-01

    We present a software package to carry out inversions of surface deformation data (any combination of InSAR, GPS, and terrestrial data, e.g., EDM, levelling) as produced by 3D free-geometry extended bodies with anomalous pressure changes. The anomalous structures are described as an aggregation of elementary cells (whose effects are estimated as coming from point sources) in an elastic half space. The linear inverse problem (considering some simple regularization conditions) is solved by means of an exploratory approach. This software represents the open implementation of a previously published methodology (Camacho et al., 2011). It can be freely used with large data sets (e.g. InSAR data sets) or with data coming from small control networks (e.g. GPS monitoring data), mainly in volcanic areas, to estimate the expected pressure bodies representing magmatic intrusions. Here, the software is applied to some real test cases.
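
    The inversion itself is beyond a short example, but the forward model underlying such point-source aggregations can be sketched. Below is a hedged Python version of one common form of the Mogi point-source surface displacement in an elastic half-space, shown only as an illustration of the kind of elementary-cell contribution that is summed in this type of inversion, not as the package's actual code; all parameter values are made up.

        # Hedged sketch: surface displacement of a Mogi point source (volume change dV
        # at depth d) in an elastic half-space, a common elementary building block for
        # pressure-source inversions. Parameter values are illustrative.
        import numpy as np

        def mogi_displacement(x, y, depth, dV, nu=0.25):
            """Return (ux, uy, uz) at surface points (x, y) for a source at the origin."""
            r = np.hypot(x, y)
            R3 = (r**2 + depth**2) ** 1.5
            scale = (1.0 - nu) * dV / np.pi
            uz = scale * depth / R3          # vertical (uplift) component
            ur = scale * r / R3              # radial component
            with np.errstate(invalid="ignore", divide="ignore"):
                ux = np.where(r > 0, ur * x / r, 0.0)
                uy = np.where(r > 0, ur * y / r, 0.0)
            return ux, uy, uz

        x = np.linspace(-5000.0, 5000.0, 5)       # metres along a surface profile
        ux, uy, uz = mogi_displacement(x, np.zeros_like(x), depth=3000.0, dV=1.0e6)
        print(np.round(uz * 1000.0, 2), "mm of uplift along the profile")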

  9. Frequency Estimator Performance for a Software-Based Beacon Receiver

    NASA Technical Reports Server (NTRS)

    Zemba, Michael J.; Morse, Jacquelynne R.; Nessel, James A.

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a Q/V-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.
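
    One widely used family of such estimators refines the FFT peak location by interpolation; the hedged sketch below shows quadratic (parabolic) interpolation on the log-magnitude spectrum, purely as an illustration of sub-bin frequency estimation rather than any of the specific algorithms evaluated in the paper. The signal parameters are example values.

        # Illustrative sub-bin frequency estimation by parabolic interpolation of the
        # FFT magnitude peak (not necessarily one of the six estimators studied).
        import numpy as np

        fs = 10_000.0                      # sample rate, Hz (example value)
        f_true = 1234.56                   # tone frequency to recover
        n = 4096
        t = np.arange(n) / fs
        x = np.cos(2 * np.pi * f_true * t) + 0.01 * np.random.default_rng(1).standard_normal(n)

        spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
        k = int(np.argmax(spectrum))

        # Fit a parabola through the log-magnitudes of the peak bin and its neighbours.
        a, b, c = np.log(spectrum[k - 1 : k + 2])
        delta = 0.5 * (a - c) / (a - 2 * b + c)      # sub-bin offset in [-0.5, 0.5]
        f_est = (k + delta) * fs / n

        print(f"true {f_true:.2f} Hz, FFT-bin {k * fs / n:.2f} Hz, interpolated {f_est:.2f} Hz")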

  10. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    PubMed

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools-VTK, ITK, CMake, CDash, DCMTK-were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  12. Evaluation and selection of security products for authentication of computer software

    NASA Astrophysics Data System (ADS)

    Roenigk, Mark W.

    2000-04-01

    Software piracy is estimated to cost software companies over eleven billion dollars per year in lost revenue worldwide. Over fifty-three percent of all intellectual property in the form of software is pirated on a global basis. Software piracy has a dramatic effect on the employment figures for the information industry as well. In the US alone, over 130,000 jobs are lost annually as a result of software piracy.

  13. GPM Ground Validation: Pre to Post-Launch Era

    NASA Astrophysics Data System (ADS)

    Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George

    2015-04-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, accumulation, types and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data are performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of both ground- and satellite-based estimation uncertainties. To support physical validation efforts, eight (one) field campaigns have been conducted in the pre- (post-) launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, process and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi-radar, gauge and disdrometer facility located in coastal Virginia. This presentation will summarize the evolution of the NASA GPM GV program from pre- to post-launch eras and focus on evaluation of year-1 post-launch GPM satellite datasets including Level II GPROF, DPR and Combined algorithms, and Level III IMERG products.

  14. Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)

    ERIC Educational Resources Information Center

    Yavuz, Guler; Hambleton, Ronald K.

    2017-01-01

    Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…

  15. Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2014-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the Maximum-Likelihood Estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and of the temporal and spatial parameters that govern triggering effects by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data modeling purposes, using the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and problems with flat or multi-modal log-likelihood functions encounter similar issues. My program uses a robust method of presetting a parameter to overcome this non-convergence computational issue. In addition to model fitting, the software is equipped with useful tools for examining model fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has potential to be hosted online. The Java language is used for the software's core computing part, and an optional interface to the statistical package R is provided.
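
    For orientation, a hedged sketch of the temporal ETAS conditional intensity follows (spatial terms omitted); the parameter values and the tiny catalog are arbitrary, and the code is illustrative rather than a reproduction of the software described above.

        # Illustrative temporal ETAS conditional intensity:
        #   lambda(t) = mu + sum over past events i of K*exp(alpha*(m_i - m0)) / (t - t_i + c)**p
        # Parameter values and the tiny catalog below are made up for demonstration.
        import numpy as np

        mu, K, alpha, c, p, m0 = 0.02, 0.05, 1.0, 0.01, 1.1, 3.0   # example parameters

        event_times = np.array([1.0, 1.5, 4.2])      # days
        event_mags = np.array([4.5, 3.2, 5.0])

        def etas_intensity(t):
            past = event_times < t
            trig = K * np.exp(alpha * (event_mags[past] - m0)) / (t - event_times[past] + c) ** p
            return mu + trig.sum()

        for t in (0.5, 2.0, 4.3, 10.0):
            print(f"lambda({t:>4}) = {etas_intensity(t):.4f} events/day")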

  16. Computerized Bone Age Estimation Using Deep Learning Based Program: Evaluation of the Accuracy and Efficiency.

    PubMed

    Kim, Jeong Rye; Shim, Woo Hyun; Yoon, Hee Mang; Hong, Sang Hyup; Lee, Jin Seong; Cho, Young Ah; Kim, Sangki

    2017-12-01

    The purpose of this study is to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice. A Greulich-Pyle method-based deep-learning technique was used to develop the automatic software system for bone age determination. Using this software, bone age was estimated from left-hand radiographs of 200 patients (3-17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consensus of two experienced radiologists. First-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlations with the reference bone age (r = 0.992; p < 0.001). Concordance rates increased with the use of the automatic software system for both reviewer 1 (63.0% for Greulich-Pyle atlas-assisted bone age vs 72.5% for computer-assisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas-assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively. The automatic software system showed reliably accurate bone age estimation and appeared to enhance efficiency by reducing reading times without compromising diagnostic accuracy.
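
    To show how evaluation metrics of this kind are typically computed, here is a hedged Python sketch that derives a concordance rate and Pearson correlation from paired automated and reference bone-age readings; the data are simulated and the tolerance used for "concordance" is an assumption, not the study's definition.

        # Illustrative evaluation of an automated estimator against a reference reading:
        # concordance rate (within an assumed tolerance) and Pearson correlation.
        # The simulated readings and the 0.5-year tolerance are assumptions.
        import numpy as np

        rng = np.random.default_rng(42)
        reference = rng.uniform(3.0, 17.0, 200)                 # reference bone ages, years
        automated = reference + rng.normal(0.0, 0.6, 200)       # simulated software readings

        tolerance_years = 0.5
        concordance = np.mean(np.abs(automated - reference) <= tolerance_years)
        r = np.corrcoef(automated, reference)[0, 1]

        print(f"Concordance rate: {concordance:.1%}, Pearson r = {r:.3f}")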

  17. Identifying Predictors of Achievement in the Newly Defined Information Literacy: A Neural Network Analysis

    ERIC Educational Resources Information Center

    Sexton, Randall; Hignite, Michael; Margavio, Thomas M.; Margavio, Geanie W.

    2009-01-01

    Information Literacy is a concept that evolved as a result of efforts to move technology-based instructional and research efforts beyond the concepts previously associated with "computer literacy." While computer literacy was largely a topic devoted to knowledge of hardware and software, information literacy is concerned with students' abilities…

  18. Software Development Processes Applied to Computational Icing Simulation

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.; Potapezuk, Mark G.; Mellor, Pamela A.

    1999-01-01

    The development of computational icing simulation methods is making the transition from research to commonplace use in design and certification efforts. As such, standards of code management, design validation, and documentation must be adjusted to accommodate the increased expectations of the user community with respect to accuracy, reliability, capability, and usability. This paper discusses these concepts with regard to current and future icing simulation code development efforts as implemented by the Icing Branch of the NASA Lewis Research Center in collaboration with the NASA Lewis Engineering Design and Analysis Division. With the application of the techniques outlined in this paper, the LEWICE ice accretion code has become a more stable and reliable software product.

  19. Artificial Intelligence In Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1991-01-01

    Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.

  20. Methodology for Software Reliability Prediction. Volume 1.

    DTIC Science & Technology

    1987-11-01

    Only fragments of the abstract are recoverable; an application-type listing (manned and unmanned spacecraft, airborne avionics, batch and real-time closed-loop operating systems) precedes the readable text. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were…

  1. Proposal for a CLIPS software library

    NASA Technical Reports Server (NTRS)

    Porter, Ken

    1991-01-01

    This paper is a proposal to create a software library for the C Language Integrated Production System (CLIPS) expert system shell developed by NASA. Many innovative ideas for extending CLIPS were presented at the First CLIPS Users Conference, including useful user and database interfaces. CLIPS developers would benefit from a software library of reusable code. The CLIPS Users Group should establish a software library-- a course of action to make that happen is proposed. Open discussion to revise this library concept is essential, since only a group effort is likely to succeed. A response form intended to solicit opinions and support from the CLIPS community is included.

  2. Generic control software connecting astronomical instruments to the reflective memory data recording system of VLTI - bossvlti

    NASA Astrophysics Data System (ADS)

    Pozna, E.; Ramirez, A.; Mérand, A.; Mueller, A.; Abuter, R.; Frahm, R.; Morel, S.; Schmid, C.; Duc, T. Phan; Delplancke-Ströbele, F.

    2014-07-01

    The quality of data obtained by VLTI instruments may be refined by analyzing the continuous data supplied by the Reflective Memory Network (RMN). Based on 5 years of experience providing VLTI instruments (PACMAN, AMBER, MIDI) with RMN data, the procedure has been generalized to make the synchronization with observation trouble-free. The present software interface saves not only months of effort for each instrument but also provides the benefits of software frameworks. Recent applications (GRAVITY, MATISSE) supply feedback for the software to evolve. The paper highlights the way common features have been identified so that reusable code can be offered in due course.

  3. Introduction: Cybersecurity and Software Assurance Minitrack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, Luanne; George, Richard; Linger, Richard C

    Modern society is dependent on software systems of remarkable scope and complexity. Yet methods for assuring their security and functionality have not kept pace. The result is persistent compromises and failures despite best efforts. Cybersecurity methods must work together for situational awareness, attack prevention and detection, threat attribution, minimization of consequences, and attack recovery. Because defective software cannot be secure, assurance technologies must play a central role in cybersecurity approaches. There is increasing recognition of the need for rigorous methods for cybersecurity and software assurance. The goal of this minitrack is to develop science foundations, technologies, and practices that can improve the security and dependability of complex systems.

  4. Software Development and Test Methodology for a Distributed Ground System

    NASA Technical Reports Server (NTRS)

    Ritter, George; Guillebeau, Pat; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes in an effort to minimize unnecessary overhead while maximizing process benefits. The Software processes that have evolved still emphasize requirements capture, software configuration management, design documenting, and making sure the products that have been developed are accountable to initial requirements. This paper will give an overview of how the Software Processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how the COTS have provided value to the project.

  5. Specification-based software sizing: An empirical investigation of function metrics

    NASA Technical Reports Server (NTRS)

    Jeffery, Ross; Stathis, John

    1993-01-01

    For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, in themselves, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implications of these findings for organizations using function-based metrics are explored.
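
    For readers unfamiliar with how such size metrics feed an effort model, the hedged sketch below fits a simple log-log regression of effort on a priori function counts for a made-up project set; the data and the model form are illustrative assumptions, not results from the study.

        # Illustrative effort model: fit effort = a * (function_points ** b) by linear
        # regression in log space. All project data below are invented examples.
        import numpy as np

        function_points = np.array([120, 250, 400, 610, 900, 1300])
        effort_hours = np.array([800, 1900, 2700, 4500, 7200, 9800])

        b, log_a = np.polyfit(np.log(function_points), np.log(effort_hours), 1)
        a = np.exp(log_a)
        print(f"effort ~= {a:.2f} * FP^{b:.2f}")

        predicted = a * function_points ** b
        mmre = np.mean(np.abs(predicted - effort_hours) / effort_hours)
        print(f"Mean magnitude of relative error on the fitting set: {mmre:.1%}")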

  6. Sustaining an Online, Shared Community Resource for Models, Robust Open source Software Tools and Data for Volcanology - the Vhub Experience

    NASA Astrophysics Data System (ADS)

    Patra, A. K.; Valentine, G. A.; Bursik, M. I.; Connor, C.; Connor, L.; Jones, M.; Simakov, N.; Aghakhani, H.; Jones-Ivey, R.; Kosar, T.; Zhang, B.

    2015-12-01

    Over the last 5 years we have created a community collaboratory, Vhub.org [Palma et al., J. App. Volc. 3:2 doi:10.1186/2191-5040-3-2], as a place to find volcanology-related resources, a venue for users to disseminate tools, teaching resources and data, and an online platform to support collaborative efforts. As the community (currently > 6000 active users from an estimated community of comparable size) embeds the tools in the collaboratory into educational and research workflows, it became imperative to: a) redesign tools into robust, open-source, reusable software for online and offline usage and enhancement; b) share large datasets with remote collaborators and other users seamlessly and securely; c) support complex workflows for uncertainty analysis, validation and verification, and data assimilation with large data. The focus on tool development and redevelopment has been twofold: first, to use best practices in software engineering and new hardware such as multi-core and graphics processing units; second, to enhance capabilities to support inverse modeling, uncertainty quantification using large ensembles and design of experiments, calibration, and validation. Among the software engineering practices we follow are open-source development (facilitating community contributions), modularity, and reusability. Our initial targets are four popular tools on Vhub: TITAN2D, TEPHRA2, PUFF and LAVA. Use of tools like these requires many observation-driven data sets, e.g. digital elevation models of topography, satellite imagery, and field observations of deposits. These data are often maintained in private repositories and shared privately by "sneaker-net". As a partial solution to this, we tested mechanisms using iRODS software for online sharing of private data with public metadata and access limits. Finally, we adapted workflow engines (e.g. Pegasus) to support the complex data and computing workflows needed for usage such as uncertainty quantification for hazard analysis using physical models.

  7. Software reliability perspectives

    NASA Technical Reports Server (NTRS)

    Wilson, Larry; Shen, Wenhui

    1987-01-01

    Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failures can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.
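
    As a concrete example of the kind of model the abstract refers to, here is a hedged sketch that fits a Goel-Okumoto reliability growth curve, m(t) = a(1 - e^(-bt)), to cumulative failure counts from debugging by nonlinear least squares; the failure data are simulated and this is not one of the specific models evaluated in the paper.

        # Illustrative reliability-growth fit: Goel-Okumoto mean value function
        # m(t) = a * (1 - exp(-b * t)) fitted to cumulative failures found in testing.
        # The failure counts below are simulated example data.
        import numpy as np
        from scipy.optimize import curve_fit

        def goel_okumoto(t, a, b):
            return a * (1.0 - np.exp(-b * t))

        weeks = np.arange(1, 13, dtype=float)
        cumulative_failures = np.array([5, 11, 16, 20, 24, 27, 29, 31, 32, 33, 34, 34], dtype=float)

        (a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cumulative_failures, p0=(40.0, 0.2))
        remaining = a_hat - cumulative_failures[-1]
        print(f"Estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
        print(f"Estimated faults remaining after week 12: {remaining:.1f}")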

  8. Software engineering methodologies and tools

    NASA Technical Reports Server (NTRS)

    Wilcox, Lawrence M.

    1993-01-01

    Over the years many engineering disciplines have developed, including chemical, electronic, etc. Common to all engineering disciplines is the use of rigor, models, metrics, and predefined methodologies. Recently, a new engineering discipline has appeared on the scene, called software engineering. For over thirty years computer software has been developed and the track record has not been good. Software development projects often miss schedules, are over budget, do not give the user what is wanted, and produce defects. One estimate is there are one to three defects per 1000 lines of deployed code. More and more systems are requiring larger and more complex software for support. As this requirement grows, the software development problems grow exponentially. It is believed that software quality can be improved by applying engineering principles. Another compelling reason to bring the engineering disciplines to software development is productivity. It has been estimated that productivity of producing software has only increased one to two percent a year in the last thirty years. Ironically, the computer and its software have contributed significantly to the industry-wide productivity, but computer professionals have done a poor job of using the computer to do their job. Engineering disciplines and methodologies are now emerging supported by software tools that address the problems of software development. This paper addresses some of the current software engineering methodologies as a backdrop for the general evaluation of computer assisted software engineering (CASE) tools from actual installation of and experimentation with some specific tools.

  9. Sampling effort and estimates of species richness based on prepositioned area electrofisher samples

    USGS Publications Warehouse

    Bowen, Z.H.; Freeman, Mary C.

    1998-01-01

    Estimates of species richness based on electrofishing data are commonly used to describe the structure of fish communities. One electrofishing method for sampling riverine fishes that has become popular in the last decade is the prepositioned area electrofisher (PAE). We investigated the relationship between sampling effort and fish species richness at seven sites in the Tallapoosa River system, USA, based on 1,400 PAE samples collected during 1994 and 1995. First, we estimated species richness at each site using the first-order jackknife and compared observed values for species richness and jackknife estimates of species richness to estimates based on historical collection data. Second, we used a permutation procedure and nonlinear regression to examine rates of species accumulation. Third, we used regression to predict the number of PAE samples required to collect the jackknife estimate of species richness at each site during 1994 and 1995. We found that jackknife estimates of species richness generally were less than or equal to estimates based on historical collection data. The relationship between PAE electrofishing effort and species richness in the Tallapoosa River was described by a positive asymptotic curve, as found in other studies using different electrofishing gears in wadable streams. Results from nonlinear regression analyses indicated that rates of species accumulation were variable among sites and between years. Across sites and years, predictions of sampling effort required to collect jackknife estimates of species richness suggested that doubling sampling effort (to 200 PAEs) would typically increase observed species richness by not more than six species. However, sampling effort beyond about 60 PAE samples typically increased observed species richness by < 10%. We recommend using historical collection data in conjunction with a preliminary sample size of at least 70 PAE samples to evaluate estimates of species richness in medium-sized rivers. Seventy PAE samples should provide enough information to describe the relationship between sampling effort and species richness and thus facilitate evaluation of sampling effort.
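
    For reference, the first-order jackknife richness estimator used above has a simple closed form, S_jack = S_obs + f1 * (n - 1) / n, where f1 is the number of species occurring in exactly one sample and n is the number of samples. The hedged sketch below applies it to an invented sample-by-species presence matrix, not to the Tallapoosa River data.

        # First-order jackknife species-richness estimate from sample/species
        # presence-absence data. The tiny matrix below is invented for illustration.
        import numpy as np

        # Rows are PAE samples, columns are species (1 = detected in that sample).
        presence = np.array([
            [1, 1, 0, 0, 0],
            [1, 0, 1, 0, 0],
            [1, 1, 0, 0, 1],
            [0, 1, 0, 1, 0],
        ])

        n_samples = presence.shape[0]
        detected = presence.sum(axis=0) > 0
        s_obs = int(detected.sum())                       # observed species richness
        f1 = int((presence.sum(axis=0) == 1).sum())       # species seen in exactly one sample

        s_jack1 = s_obs + f1 * (n_samples - 1) / n_samples
        print(f"Observed richness = {s_obs}, first-order jackknife estimate = {s_jack1:.2f}")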

  10. DoD Key Technologies Plan

    DTIC Science & Technology

    1992-07-01

    Only disjoint fragments of this report are recoverable: …methodologies; software performance analysis; software testing; and concurrent languages. Finally, efforts in algorithms, which are primarily designed to upgrade… These codes provide a powerful research tool for testing new concepts and designs prior to experimental implementation. DoE's laser program has also… development, and specially designed production facilities. World leadership in both non-fluorinated and fluorinated materials resides in the U.S., but Japan…

  11. An Investigation of Techniques for Detecting Data Anomalies in Earned Value Management Data

    DTIC Science & Technology

    2011-12-01

    Only fragments of a table of commercial data-quality products are recoverable; the tools listed include the Harte-Hanks Trillium Software System, IBM InfoSphere Foundation Tools, Informatica Data Explorer, Informatica Analyst, Informatica Developer, Informatica Administrator, Pitney Bowes Business Insight Spectrum, SAP BusinessObjects Data Quality Management, and DataFlux, each cited for documenting quality monitoring efforts and tracking data quality improvements, along with vendor URLs (e.g., http://www.informatica.com/products_services/Pages/index.aspx).

  12. Computer software documentation

    NASA Technical Reports Server (NTRS)

    Comella, P. A.

    1973-01-01

    A tutorial in the documentation of computer software is presented. It presents a methodology for achieving an adequate level of documentation as a natural outgrowth of the total programming effort commencing with the initial problem statement and definition and terminating with the final verification of code. It discusses the content of adequate documentation, the necessity for such documentation and the problems impeding achievement of adequate documentation.

  13. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, Charlie; Crook, Jerry

    1997-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for a generic, advanced engine control system that will result in lower software maintenance (operations) costs. It effectively accommodates software requirements changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives and benefits of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. Currently, the program is focused on supporting MSFC in accomplishing a Space Shuttle Main Engine (SSME) hot-fire test at Stennis Space Center and the Low Cost Boost Technology (LCBT) Program.

  14. Usability analysis of 2D graphics software for designing technical clothing.

    PubMed

    Teodoroski, Rita de Cassia Clark; Espíndola, Edilene Zilma; Silva, Enéias; Moro, Antônio Renato Pereira; Pereira, Vera Lucia D V

    2012-01-01

    With the advent of technology, the computer became a working tool increasingly present in companies. Its purpose is to increase production and reduce the errors inherent in manual production. The aim of this study was to analyze the usability of 2D graphics software used by a professional to create clothing designs during his work. The movements of the mouse, keyboard and graphical tools were monitored in real time by the software Camtasia 7® installed on the user's computer. To register the use of the mouse and keyboard we used auxiliary software called MouseMeter®, which quantifies the number of times the right, middle and left mouse buttons and the keyboard were pressed, as well as the distance traveled in meters by the cursor on the screen. Data were collected in periods of 15 minutes, 1 hour and 8 hours, consecutively. The results showed that the job is repetitive and demands high physical effort, which can lead to the appearance of repetitive strain injuries. Thus, to minimize operator effort and thereby enhance the usability of the examined tool, it becomes imperative to replace the mouse with a tablet device, which offers an electronic pen and a drawing platform for design development.
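
    To illustrate the kind of metric such monitoring tools report, the hedged sketch below computes cursor travel distance and button-press counts from a logged stream of input events; the event format, the pixels-to-meters scale, and the data are all assumptions, not MouseMeter's actual output.

        # Illustrative computation of input-usage metrics from a logged event stream:
        # total cursor travel (converted from pixels to meters) and button-press counts.
        # The event list and the pixel pitch are invented example values.
        import math
        from collections import Counter

        PIXEL_PITCH_M = 0.000265          # assumed physical size of one pixel (~96 dpi)

        events = [
            ("move", 100, 100), ("move", 400, 120), ("click", "left"),
            ("move", 420, 500), ("click", "right"), ("move", 50, 480), ("click", "left"),
        ]

        distance_px = 0.0
        last_pos = None
        clicks = Counter()

        for event in events:
            if event[0] == "move":
                _, x, y = event
                if last_pos is not None:
                    distance_px += math.hypot(x - last_pos[0], y - last_pos[1])
                last_pos = (x, y)
            elif event[0] == "click":
                clicks[event[1]] += 1

        print(f"Cursor travel: {distance_px * PIXEL_PITCH_M:.2f} m, clicks: {dict(clicks)}")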

  15. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-07-28

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
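
    A much-simplified, hedged sketch of the projection-histogram counting step (step 2 above) follows, using a synthetic binary plant mask rather than real imagery; the thresholds and the simplified implementation are assumptions and do not reproduce the authors' algorithm.

        # Illustrative plant counting from a binary plant mask: project the mask onto
        # the row axis (column sums) and count contiguous runs above a threshold.
        # The synthetic mask and thresholds are example assumptions only.
        import numpy as np

        mask = np.zeros((40, 120), dtype=np.uint8)
        for center in (15, 45, 75, 105):               # four synthetic "seedlings"
            mask[10:35, center - 5 : center + 5] = 1

        projection = mask.sum(axis=0)                  # vertical projection histogram
        above = projection > 5                         # presence threshold (assumed)

        # Count contiguous above-threshold runs as individual plants.
        plant_count = int(np.sum(above[1:] & ~above[:-1]) + above[0])
        print(f"Estimated plant count: {plant_count}")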

  16. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Hanson, Bradley D.; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images. PMID:26225982

  17. Well-to-wheel greenhouse gas emissions and energy use analysis of hypothetical fleet of electrified vehicles in Canada and the U.S

    NASA Astrophysics Data System (ADS)

    Maduro, Miguelangel

    The shift to strong hybrid and electrified vehicle architectures engenders controversy and brings about many unanswered questions. It is unclear whether developed markets will have the infrastructure in place to support and successfully implement them. To date, limited effort has been made to determine whether the energy and transportation solutions that work well for one city or geographic region extend broadly. A region's capacity to supply a fleet of EVs or plug-in hybrid vehicles with the required charging infrastructure does not necessarily make such vehicle architectures an optimal solution. In this study, a mix of technologies ranging from HEVs to PHEVs and EREVs through to battery electric vehicles was analyzed for three Canadian provinces and three U.S. regions for the year 2020. Environmental software tools developed by government agencies were used to estimate greenhouse gas emissions and energy use. Projected vehicle technology shares were employed to estimate regional environmental implications. Alternative vehicle technologies and fuels are recommended for each region based on local power generation schemes.

  18. Impact of Growing Business on Software Processes

    NASA Astrophysics Data System (ADS)

    Nikitina, Natalja; Kajko-Mattsson, Mira

    When growing their businesses, software organizations should not only put effort into developing and executing their business strategies, but also into managing and improving their internal software development processes and aligning them with business growth strategies. Only in this way can they ensure that their businesses grow in a healthy and sustainable way. In this paper, we map out one software company's business growth over the course of its historical events and identify its impact on the company's software production processes and capabilities. The impact concerns benefits, challenges, problems and lessons learned. The most important lesson learned is that although business growth became a stimulus for reflecting on and improving software processes, the organization lacked guidelines to support this improvement and to align it with business growth. Finally, the paper generates research questions providing a platform for future research.

  19. Space Station Mission Planning System (MPS) development study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Klus, W. J.

    1987-01-01

    The basic objective of the Space Station (SS) Mission Planning System (MPS) Development Study was to define a baseline Space Station mission plan and the associated hardware and software requirements for the system. A detailed definition of the Spacelab (SL) payload mission planning process and SL Mission Integration Planning System (MIPS) software was derived. A baseline concept was developed for performing SS manned base payload mission planning, consistent with current Space Station design/operations concepts and philosophies. The SS MPS software requirements were defined. Requirements for new software include candidate programs for the application of artificial intelligence techniques to capture and make more effective use of mission planning expertise. A SS MPS Software Development Plan was also developed, which phases the effort to develop the software that implements the SS mission planning concept.

  20. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    PubMed

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented, using skeletons from a metapopulation of identified adult individuals drawn from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to make the method even more widely available. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. These results were confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimation corresponds to the traditional sectioning point used in osteological studies. DSP2 replaces the former version, which should no longer be used. DSP2 is a robust, reliable and user-friendly technique for sexing adult ossa coxae. © 2017 Wiley Periodicals, Inc.
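
    The classification scheme described above can be illustrated with a minimal sketch: linear discriminant analysis with a 0.95 posterior-probability threshold below which the estimate is reported as indeterminate. This is not the DSP2 code; the simulated reference sample and measurement layout below are assumptions made only for the example.

```python
# Minimal sketch (not DSP2): LDA-based sex estimation with a posterior threshold.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical reference sample: 10 pelvic measurements per individual, known sex.
X_ref = rng.normal(size=(2040, 10)) + np.repeat([[0.0], [0.8]], [1020, 1020], axis=0)
y_ref = np.repeat(["F", "M"], [1020, 1020])

lda = LinearDiscriminantAnalysis().fit(X_ref, y_ref)

def estimate_sex(measurements, threshold=0.95):
    """Return 'F', 'M', or 'indeterminate' for one vector of 10 measurements."""
    posteriors = lda.predict_proba(np.asarray(measurements).reshape(1, -1))[0]
    best = posteriors.argmax()
    return lda.classes_[best] if posteriors[best] >= threshold else "indeterminate"

print(estimate_sex(X_ref[0]))
```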

  1. Data-Driven Decision Making as a Tool to Improve Software Development Productivity

    ERIC Educational Resources Information Center

    Brown, Mary Erin

    2013-01-01

    The worldwide software project failure rate, based on a survey of information technology software managers' views of user satisfaction, product quality, and staff productivity, is estimated to be between 24% and 36%, and software project success has not kept pace with the advances in hardware. The problem addressed by this study was the limited…

  2. An information model for use in software management estimation and prediction

    NASA Technical Reports Server (NTRS)

    Li, Ningda R.; Zelkowitz, Marvin V.

    1993-01-01

    This paper describes the use of cluster analysis for determining the information model within collected software engineering development data at the NASA/GSFC Software Engineering Laboratory. We describe the Software Management Environment tool that allows managers to predict development attributes during early phases of a software project and the modifications we propose to allow it to develop dynamic models for better predictions of these attributes.
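
    A rough sketch of the general idea, assuming k-means clustering over hypothetical early-phase measures; it is not the Software Management Environment tool itself, and the data and attribute names are invented.

```python
# Illustrative sketch: cluster historical projects, then predict attributes of a
# new project from the mean outcomes of its nearest cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
early = rng.normal(size=(60, 2))                              # e.g., size and staffing measures
outcomes = early @ np.array([[2.0, 0.5], [1.0, 3.0]]) + 5.0   # e.g., effort and error counts

km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(early)

def predict_attributes(new_early_measures):
    """Predict outcome attributes as the mean of the nearest cluster's members."""
    cluster = km.predict(np.asarray(new_early_measures).reshape(1, -1))[0]
    return outcomes[km.labels_ == cluster].mean(axis=0)

print(predict_attributes([0.2, -0.1]))
```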

  3. Remediating Non-Positive Definite State Covariances for Collision Probability Estimation

    NASA Technical Reports Server (NTRS)

    Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.

    2017-01-01

    The NASA Conjunction Assessment Risk Analysis team estimates the probability of collision (Pc) for a set of Earth-orbiting satellites. The Pc estimation software processes satellite position+velocity states and their associated covariance matrices. On occasion, the software encounters non-positive definite (NPD) state covariances, which can adversely affect or prevent the Pc estimation process. Interpolation inaccuracies appear to account for the majority of such covariances, although other mechanisms contribute also. This paper investigates the origin of NPD state covariance matrices, three different methods for remediating these covariances when and if necessary, and the associated effects on the Pc estimation process.
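
    The paper evaluates several remediation methods; the sketch below shows just one common, generic approach (eigenvalue clipping) as an assumed illustration, not the team's implementation.

```python
# Generic remediation sketch: clip negative eigenvalues of a symmetric covariance
# matrix to restore positive semi-definiteness.
import numpy as np

def remediate_covariance(cov, floor=0.0):
    """Symmetrize, then floor any negative eigenvalues at `floor`."""
    sym = 0.5 * (cov + cov.T)
    eigvals, eigvecs = np.linalg.eigh(sym)
    return eigvecs @ np.diag(np.clip(eigvals, floor, None)) @ eigvecs.T

npd = np.array([[4.0, 2.0, 3.9],
                [2.0, 1.0, 2.0],
                [3.9, 2.0, 3.0]])        # hypothetical NPD covariance
print(np.linalg.eigvalsh(npd))            # one eigenvalue is negative
print(np.linalg.eigvalsh(remediate_covariance(npd)))  # all eigenvalues >= 0
```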

  4. ACUTE-TO-CHRONIC ESTIMATION (ACE V 2.0) WITH TIME-CONCENTRATION-EFFECT MODELS: USER MANUAL AND SOFTWARE

    EPA Science Inventory

    Ellersieck, Mark R., Amha Asfaw, Foster L. Mayer, Gary F. Krause, Kai Sun and Gunhee Lee. 2003. Acute-to-Chronic Estimation (ACE v2.0) with Time-Concentration-Effect Models: User Manual and Software. EPA/600/R-03/107. U.S. Environmental Protection Agency, National Health and Envi...

  5. High Assurance Software

    DTIC Science & Technology

    2013-10-22

    HIGH ASSURANCE SOFTWARE (Congressional). William Mahoney, University of Nebraska, 10/22/2013. Final Report. DISTRIBUTION A: Distribution approved for ...

  6. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated that, relative to the breath analyzer calibration protocol, the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.

  7. Onyx-Advanced Aeropropulsion Simulation Framework Created

    NASA Technical Reports Server (NTRS)

    Reed, John A.

    2001-01-01

    The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.

  8. Automating the parallel processing of fluid and structural dynamics calculations

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Cole, Gary L.

    1987-01-01

    The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.

  9. The pLISA project in ASTERICS

    NASA Astrophysics Data System (ADS)

    De Bonis, Giulia; Bozza, Cristiano

    2017-03-01

    In the framework of Horizon 2020, the European Commission approved the ASTERICS initiative (ASTronomy ESFRI and Research Infrastructure CluSter) to collect knowledge and experiences from astronomy, astrophysics and particle physics and foster synergies among existing research infrastructures and scientific communities, hence paving the way for future ones. ASTERICS aims at producing a common set of tools and strategies to be applied in Astronomy ESFRI facilities. In particular, it will target the so-called multi-messenger approach to combine information from optical and radio telescopes, photon counters and neutrino telescopes. pLISA is a software tool under development in ASTERICS to help and promote machine learning as a unified approach to multivariate analysis of astrophysical data and signals. The library will offer a collection of classification parameters, estimators, classes and methods to be linked and used in reconstruction programs (and possibly also extended), to characterize events in terms of particle identification and energy. The pLISA library aims at offering the software infrastructure for applications developed inside different experiments and has been designed with an effort to extrapolate general, physics-related estimators from the specific features of the data model related to each particular experiment. pLISA is oriented towards parallel computing architectures, with awareness of the opportunity of using GPUs as accelerators demanding specifically optimized algorithms and to reduce the costs of processing hardware requested for the reconstruction tasks. Indeed, a fast (ideally, real-time) reconstruction can open the way for the development or improvement of alert systems, typically required by multi-messenger search programmes among the different experimental facilities involved in ASTERICS.

  10. Native American Admixture in the Quebec Founder Population

    PubMed Central

    Moreau, Claudia; Lefebvre, Jean-François; Jomphe, Michèle; Bhérer, Claude; Ruiz-Linares, Andres; Vézina, Hélène; Roy-Gagnon, Marie-Hélène; Labuda, Damian

    2013-01-01

    For years, studies of founder populations and genetic isolates represented the mainstream of genetic mapping in the effort to target genetic defects causing Mendelian disorders. The genetic homogeneity of such populations as well as relatively homogeneous environmental exposures were also seen as primary advantages in studies of genetic susceptibility loci that underlie complex diseases. European colonization of the St-Lawrence Valley by a small number of settlers, mainly from France, resulted in a founder effect reflected by the appearance of a number of population-specific disease-causing mutations in Quebec. The purported genetic homogeneity of this population was recently challenged by genealogical and genetic analyses. We studied one of the contributing factors to genetic heterogeneity, early Native American admixture that was never investigated in this population before. Consistent admixture estimates, in the order of one per cent, were obtained from genome-wide autosomal data using the ADMIXTURE and HAPMIX software, as well as with the fastIBD software evaluating the degree of the identity-by-descent between Quebec individuals and Native American populations. These genomic results correlated well with the genealogical estimates. Correlations are imperfect most likely because of incomplete records of Native founders’ origin in genealogical data. Although the overall degree of admixture is modest, it contributed to the enrichment of the population diversity and to its demographic stratification. Because admixture greatly varies among regions of Quebec and among individuals, it could have significantly affected the homogeneity of the population, which is of importance in mapping studies, especially when rare genetic susceptibility variants are in play. PMID:23776491

  11. Development of Autonomous Aerobraking - Phase 2

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.

    2013-01-01

    Phase 1 of the Development of Autonomous Aerobraking (AA) Assessment investigated the technical capability of transferring the processes of aerobraking maneuver (ABM) decision-making (currently performed on the ground by an extensive workforce and communicated to the spacecraft via the deep space network) to an efficient flight software algorithm onboard the spacecraft. This document describes Phase 2 of this study, which was a 12-month effort to improve and rigorously test the AA Development Software developed in Phase 1. Keywords: Aerobraking maneuver; Autonomous Aerobraking; Autonomous Aerobraking Development Software; Deep Space Network; NASA Engineering and Safety Center.

  12. NASA integrated vehicle health management technology experiment for X-37

    NASA Astrophysics Data System (ADS)

    Schwabacher, Mark; Samuels, Jeff; Brownston, Lee

    2002-07-01

    The NASA Integrated Vehicle Health Management (IVHM) Technology Experiment for X-37 was intended to run IVHM software on board the X-37 spacecraft. The X-37 is an unpiloted vehicle designed to orbit the Earth for up to 21 days before landing on a runway. The objectives of the experiment were to demonstrate the benefits of in-flight IVHM to the operation of a Reusable Launch Vehicle, to advance the Technology Readiness Level of this IVHM technology within a flight environment, and to demonstrate that the IVHM software could operate on the Vehicle Management Computer. The scope of the experiment was to perform real-time fault detection and isolation for X-37's electrical power system and electro-mechanical actuators. The experiment used Livingstone, a software system that performs diagnosis using a qualitative, model-based reasoning approach that searches system-wide interactions to detect and isolate failures. Two of the challenges we faced were to make this research software more efficient so that it would fit within the limited computational resources that were available to us on the X-37 spacecraft, and to modify it so that it satisfied the X-37's software safety requirements. Although the experiment is currently unfunded, the development effort resulted in major improvements in Livingstone's efficiency and safety. This paper reviews some of the details of the modeling and integration efforts, and some of the lessons that were learned.

  13. System Software Framework for System of Systems Avionics

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.; Peterson, Benjamin L; Thompson, Hiram C.

    2005-01-01

    Project Constellation implements NASA's vision for space exploration to expand human presence in our solar system. The engineering focus of this project is developing a system of systems architecture. This architecture allows for the incremental development of the overall program. Systems can be built and connected in a "Lego style" manner to generate configurations supporting various mission objectives. The development of the avionics or control systems of such a massive project will result in concurrent engineering. Also, each system will have software and the need to communicate with other (possibly heterogeneous) systems. Fortunately, this design problem has already been solved during the creation and evolution of systems such as the Internet and the Department of Defense's successful effort to standardize distributed simulation (now IEEE 1516). The solution relies on the use of a standard layered software framework and a communication protocol. A standard framework and communication protocol is suggested for the development and maintenance of Project Constellation systems. The ARINC 653 standard is a great start for such a common software framework. This paper proposes a common system software framework that uses the Real Time Publish/Subscribe protocol for framework-to-framework communication to extend ARINC 653. It is highly recommended that such a framework be established before development. This is important for the success of concurrent engineering. The framework provides an infrastructure for general system services and is designed for flexibility to support a spiral development effort.
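
    A generic, in-process publish/subscribe sketch illustrating the communication style referred to above; it is not RTPS/DDS or ARINC 653, and the topic name and message fields are invented.

```python
# Minimal topic-based publish/subscribe bus (illustration only).
from collections import defaultdict
from typing import Any, Callable

class Bus:
    def __init__(self) -> None:
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

bus = Bus()
bus.subscribe("vehicle/attitude", lambda msg: print("guidance received:", msg))
bus.publish("vehicle/attitude", {"roll": 0.1, "pitch": -0.3, "yaw": 2.0})
```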

  14. Micro CT based truth estimation of nodule volume

    NASA Astrophysics Data System (ADS)

    Kinnard, L. M.; Gavrielides, M. A.; Myers, K. J.; Zeng, R.; Whiting, B.; Lin-Gibson, S.; Petrick, N.

    2010-03-01

    With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible, minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this work we investigate the reliability of micro CT to determine the "true" volume of synthetic nodules. The advantage of micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical, spiculated and lobulated nodules with diameters from 5 to 40 mm, and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume for -630 HU nodules range from [-21.7%, -0.6%] (mean= -11.9%) and the differences for +100 HU nodules range from [-0.9%, 3.0%] (mean=1.7%).

  15. OntoSoft: A Software Registry for Geosciences

    NASA Astrophysics Data System (ADS)

    Garijo, D.; Gil, Y.

    2017-12-01

    The goal of the EarthCube OntoSoft project is to enable the creation of an ecosystem for software stewardship in geosciences that will empower scientists to manage their software as valuable scientific assets. By sharing software metadata in OntoSoft, scientists enable broader access to that software by other scientists, software professionals, students, and decision makers. Our work to date includes: 1) an ontology for describing scientific software metadata, 2) a distributed scientific software repository that contains more than 750 entries that can be searched and compared across metadata fields, 3) an intelligent user interface that guides scientists to publish software and allows them to crowdsource its corresponding metadata. We have also developed a training program where scientists learn to describe and cite software in their papers in addition to data and provenance, and we are using OntoSoft to show them the benefits of publishing their software metadata. This training program is part of a Geoscience Papers of the Future Initiative, where scientists are reflecting on their current practices, benefits and effort for sharing software and data. This journal paper can be submitted to a Special Section of the AGU Earth and Space Science Journal.

  16. Parallelization of Rocket Engine System Software (Press)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1996-01-01

    The main goal is to assess parallelization requirements for the Rocket Engine Numeric Simulator (RENS) project which, aside from gathering information on liquid-propelled rocket engines and setting forth requirements, involves a large FORTRAN-based package at NASA Lewis Research Center and TDK software developed by SUBR/UWF. The ultimate aim is to develop, test, integrate, and suitably deploy a family of software packages on various aspects and facets of rocket engines using liquid propellants. At present, all project efforts by the funding agency, NASA Lewis Research Center, and the HBCU participants are disseminated over the internet using world wide web home pages. Considering the obviously expensive methods of actual field trials, the benefits of software simulators are potentially enormous. When realized, these benefits will be analogous to those provided by numerous CAD/CAM packages and flight-training simulators. According to the overall task assignments, Hampton University's role is to collect all available software, place them in a common format, assess and evaluate, define interfaces, and provide integration. Most importantly, HU's mission is to see to it that real-time performance is assured. This involves source code translations, porting, and distribution. The porting will be done in two phases: first, place all software on the Cray X-MP platform using FORTRAN. After testing and evaluation on the Cray X-MP, the code will be translated to C++ and ported to the parallel nCUBE platform. At present, we are evaluating another option of distributed processing over local area networks using Sun NFS, Ethernet, and TCP/IP. Considering the heterogeneous nature of the present software (e.g., it first started as an expert system using LISP machines), which now involves FORTRAN code, the effort is expected to be quite challenging.

  17. Integration, Validation, and Application of a PV Snow Coverage Model in SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryberg, David; Freeman, Janine

    2015-09-01

    Due to the increasing deployment of PV systems in snowy climates, there is significant interest in a method capable of estimating PV losses resulting from snow coverage that has been verified for a wide variety of system designs and locations. A scattering of independent snow coverage models have been developed over the last 15 years; however, there has been very little effort spent on verifying these models beyond the system design and location on which they were based. Moreover, none of the major PV modeling software products have incorporated any of these models into their workflow. In response to this deficiency, we have integrated the methodology of the snow model developed in the paper by Marion et al. [1] into the National Renewable Energy Laboratory's (NREL) System Advisor Model (SAM). In this work we describe how the snow model is implemented in SAM and discuss our demonstration of the model's effectiveness at reducing error in annual estimations for two PV arrays. Following this, we use this new functionality in conjunction with a long term historical dataset to estimate average snow losses across the United States for a typical PV system design. The open availability of the snow loss estimation capability in SAM to the PV modeling community, coupled with our results of the nation-wide study, will better equip the industry to accurately estimate PV energy production in areas affected by snowfall.

  18. IEEE Computer Society/Software Engineering Institute Software Process Achievement (SPA) Award 2009

    DTIC Science & Technology

    2011-03-01

    capabilities to our GDM. We also introduced software as a service (SaaS) as part of our technology solutions and have further enhanced our ability to...

  19. The Improvement Cycle: Analyzing Our Experience

    NASA Technical Reports Server (NTRS)

    Pajerski, Rose; Waligora, Sharon

    1996-01-01

    NASA's Software Engineering Laboratory (SEL), one of the earliest pioneers in the areas of software process improvement and measurement, has had a significant impact on the software business at NASA Goddard. At the heart of the SEL's improvement program is a belief that software products can be improved by optimizing the software engineering process used to develop them and a long-term improvement strategy that facilitates small incremental improvements that accumulate into significant gains. As a result of its efforts, the SEL has incrementally reduced development costs by 60%, decreased error rates by 85%, and reduced cycle time by 25%. In this paper, we analyze the SEL's experiences on three major improvement initiatives to better understand the cyclic nature of the improvement process and to understand why some improvements take much longer than others.

  20. Power estimation using simulations for air pollution time-series studies

    PubMed Central

    2012-01-01

    Background: Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods: Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results: In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions: These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
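
    A compact sketch of the simulation approach described above, written under assumed scenario values (series length, baseline count, association size, number of simulations); it is not the authors' code.

```python
# Simulation-based power estimate for a Poisson time-series association.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_days, beta_true, n_sims = 365, 0.002, 200      # assumed scenario values

pollutant = rng.normal(30.0, 10.0, n_days)        # hypothetical daily pollutant series
baseline = np.log(40.0)                           # roughly 40 visits per day
X = sm.add_constant(pollutant)

rejections = 0
for _ in range(n_sims):
    counts = rng.poisson(np.exp(baseline + beta_true * pollutant))
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    if fit.pvalues[1] < 0.05:                     # test of the pollutant coefficient
        rejections += 1

print(f"estimated power: {rejections / n_sims:.2f}")
```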

  1. Power estimation using simulations for air pollution time-series studies.

    PubMed

    Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt

    2012-09-20

    Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.

  2. POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models

    PubMed Central

    Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.

    2014-01-01

    The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the “univariate” approach to repeated measures (UNIREP), the Hotelling Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in “multivariate” approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limits values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516
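
    As a simple illustration of the kind of calculation such software performs (this is not POWERLIB itself), power for a fixed-effect F test in the univariate linear model can be computed from the noncentral F distribution; the design values below are hypothetical.

```python
# Power of an F test from the noncentral F distribution (illustration only).
from scipy.stats import f, ncf

def linear_model_power(ncp, df_num, df_den, alpha=0.05):
    """P(F > critical value) under the noncentral F with noncentrality ncp."""
    f_crit = f.ppf(1.0 - alpha, df_num, df_den)
    return ncf.sf(f_crit, df_num, df_den, ncp)

# Hypothetical design: 1 numerator df, 38 denominator df, noncentrality 10.
print(linear_model_power(ncp=10.0, df_num=1, df_den=38))
```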

  3. WILDFIRE IGNITION RESISTANCE ESTIMATOR WIZARD SOFTWARE DEVELOPMENT REPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, M.; Robinson, C.; Gupta, N.

    2012-10-10

    This report describes the development of a software tool, entitled “WildFire Ignition Resistance Estimator Wizard” (WildFIRE Wizard, Version 2.10). This software was developed within the Wildfire Ignition Resistant Home Design (WIRHD) program, sponsored by the U. S. Department of Homeland Security, Science and Technology Directorate, Infrastructure Protection & Disaster Management Division. WildFIRE Wizard is a tool that enables homeowners to take preventive actions that will reduce their home’s vulnerability to wildfire ignition sources (i.e., embers, radiant heat, and direct flame impingement) well in advance of a wildfire event. This report describes the development of the software, its operation, its technical basis and calculations, and steps taken to verify its performance.

  4. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Wes

    2016-07-24

    The primary challenge motivating this team’s work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. Broadly, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, and engaged in software technology R&D, as well as in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.

  5. The Federal Aviation Administration Plan for Research, Engineering and Development, 1994

    DTIC Science & Technology

    1994-05-01

    Aeronautical Data Link Communications and Applications, and 051-130 Airport Safety: (COTS) runway incursion system software will be demonstrated as a ...; airport departure and arrival scheduling plans that optimize daily traffic flows for long-range flights between major city ...; OTFP System to ...; expanded HARS planning capabilities to include enhanced communications software for aviation dispatchers to develop optimized high-altitude flight ...

  6. Hardware Evolution of Closed-Loop Controller Designs

    NASA Technical Reports Server (NTRS)

    Gwaltney, David; Ferguson, Ian

    2002-01-01

    Poster presentation will outline on-going efforts at NASA, MSFC to employ various Evolvable Hardware experimental platforms in the evolution of digital and analog circuitry for application to automatic control. Included will be information concerning the application of commercially available hardware and software along with the use of the JPL developed FPTA2 integrated circuit and supporting JPL developed software. Results to date will be presented.

  7. Virtual Exercise Training Software System

    NASA Technical Reports Server (NTRS)

    Vu, L.; Kim, H.; Benson, E.; Amonette, W. E.; Barrera, J.; Perera, J.; Rajulu, S.; Hanson, A.

    2018-01-01

    The purpose of this study was to develop and evaluate a virtual exercise training software system (VETSS) capable of providing real-time instruction and exercise feedback during exploration missions. A resistive exercise instructional system was developed using a Microsoft Kinect depth-camera device, which provides markerless 3-D whole-body motion capture at a small form factor and minimal setup effort. It was hypothesized that subjects using the newly developed instructional software tool would perform the deadlift exercise with more optimal kinematics and consistent technique than those without the instructional software. Following a comprehensive evaluation in the laboratory, the system was deployed for testing and refinement in the NASA Extreme Environment Mission Operations (NEEMO) analog.

  8. A Roadmap for HEP Software and Computing R&D for the 2020s

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alves, Antonio Augusto, Jr; et al.

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

  9. General-Purpose Front End for Real-Time Data Processing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, front end signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable and is executed in an embedded virtual machine. Each component can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.

  10. A communication channel model of the software process

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1988-01-01

    Reported here is beginning research into a noisy communication channel analogy of software development process productivity, in order to establish quantifiable behavior and theoretical bounds. The analogy leads to a fundamental mathematical relationship between human productivity and the amount of information supplied by the developers, the capacity of the human channel for processing and transmitting information, the software product yield (object size), the work effort, requirements efficiency, tool and process efficiency, and programming environment advantage. Also derived is an upper bound to productivity that shows that software reuse is the only means that can lead to unbounded productivity growth; practical considerations of size and cost of reusable components may reduce this to a finite bound.
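
    One illustrative way to express such a bound, written here as an assumed paraphrase rather than the paper's exact formulation: if C is the capacity of the human channel, I the information the developers must newly supply, S the product yield (object size), and T the work effort, then

\[
T \;\ge\; \frac{I}{C}
\qquad\Longrightarrow\qquad
P \;=\; \frac{S}{T} \;\le\; \frac{S\,C}{I}.
\]

    Under this reading, reuse lowers the new information I needed for a given yield S, so the productivity bound grows without limit as I approaches zero, consistent with the reuse observation above.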

  11. A communication channel model of the software process

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1988-01-01

    Beginning research into a noisy communication channel analogy of software development process productivity, in order to establish quantifiable behavior and theoretical bounds, is discussed. The analogy leads to a fundamental mathematical relationship between human productivity and the amount of information supplied by the developers, the capacity of the human channel for processing and transmitting information, the software product yield (object size), the work effort, requirements efficiency, tool and process efficiency, and programming environment advantage. An upper bound to productivity is derived that shows that software reuse is the only means that can lead to unbounded productivity growth; practical considerations of size and cost of reusable components may reduce this to a finite bound.

  12. Verification Testing: Meet User Needs Figure of Merit

    NASA Technical Reports Server (NTRS)

    Kelly, Bryan W.; Welch, Bryan W.

    2017-01-01

    Verification is the process through which Modeling and Simulation (M&S) software goes to ensure that it has been rigorously tested and debugged for its intended use. Validation confirms that said software accurately models and represents the real world system. Credibility gives an assessment of the development and testing effort that the software has gone through, as well as how accurate and reliable test results are. Together, these three components form Verification, Validation, and Credibility (VV&C), the process by which all NASA modeling software is to be tested to ensure that it is ready for implementation. NASA created this process following the CAIB (Columbia Accident Investigation Board) report, which sought to understand the reasons the Columbia space shuttle failed during reentry. The report's conclusion was that the accident was fully avoidable; however, among other issues, the data necessary to make an informed decision were not available, and the result was complete loss of the shuttle and crew. In an effort to mitigate this problem, NASA put out its Standard for Models and Simulations, currently in version NASA-STD-7009A, in which it detailed its recommendations, requirements and rationale for the different components of VV&C. It did this with the intention that it would allow people receiving M&S software to clearly understand, and have data from, the past development effort. This in turn would allow people who had not worked with the M&S software before to move forward with greater confidence and efficiency in their work. This particular project looks to perform Verification on several MATLAB (Registered Trademark) (The MathWorks, Inc.) scripts that will later be implemented in a website interface. It seeks to note and define the limits of operation, the units and significance, and the expected datatype and format of the inputs and outputs of each of the scripts. This is intended to prevent the code from attempting to make incorrect or impossible calculations. Additionally, this project will look at the code generally and note inconsistencies, redundancies, and other aspects that may become problematic or slow down the code's run time. Scripts lacking documentation will also be commented and cataloged.

  13. Statistical Theory for the "RCT-YES" Software: Design-Based Causal Inference for RCTs. NCEE 2015-4011

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2015-01-01

    This report presents the statistical theory underlying the "RCT-YES" software that estimates and reports impacts for RCTs for a wide range of designs used in social policy research. The report discusses a unified, non-parametric design-based approach for impact estimation using the building blocks of the Neyman-Rubin-Holland causal…

  14. Users guide for STHARVEST: software to estimate the cost of harvesting small timber.

    Treesearch

    Roger D. Fight; Xiaoshan Zhang; Bruce R. Hartsough

    2003-01-01

    The STHARVEST computer application is Windows-based, public-domain software used to estimate costs for harvesting small-diameter stands or the small-diameter component of a mixed-sized stand. The equipment production rates were developed from existing studies. Equipment operating cost rates were based on November 1998 prices for new equipment and wage rates for the...

  15. User guide for HCR Estimator 2.0: software to calculate cost and revenue thresholds for harvesting small-diameter ponderosa pine.

    Treesearch

    Dennis R. Becker; Debra Larson; Eini C. Lowell; Robert B. Rummer

    2008-01-01

    The HCR (Harvest Cost-Revenue) Estimator is engineering and financial analysis software used to evaluate stand-level financial thresholds for harvesting small-diameter ponderosa pine (Pinus ponderosa Dougl. ex Laws.) in the Southwest United States. The Windows-based program helps contractors and planners to identify costs associated with tree...

  16. Estimation of radiation dose to patients from (18) FDG whole body PET/CT investigations using dynamic PET scan protocol.

    PubMed

    Kaushik, Aruna; Jaimini, Abhinav; Tripathi, Madhavi; D'Souza, Maria; Sharma, Rajnish; Mondal, Anupam; Mishra, Anil K; Dwarakanath, Bilikere S

    2015-12-01

    There is growing concern over the radiation exposure of patients undergoing 18FDG PET/CT (18F-fluorodeoxyglucose positron emission tomography/computed tomography) whole body investigations. The aim of the present study was to characterize the kinetics of 18FDG distribution and estimate the radiation dose received by patients undergoing 18FDG whole body PET/CT investigations. Dynamic PET scans in different regions of the body were performed in 49 patients to measure the percentage uptake of 18FDG in brain, liver, spleen, adrenals, kidneys and stomach. The residence time in these organs was calculated and radiation dose was estimated using the OLINDA software. The radiation dose from the CT component was computed using the software CT-Expo and measured using a computed tomography dose index (CTDI) phantom and ionization chamber. As per the clinical protocol, the patients refrained from eating and drinking for a minimum period of 4 h prior to the study. The estimated residence times in males were 0.196 h (brain), 0.09 h (liver), 0.007 h (spleen), 0.0006 h (adrenals), 0.013 h (kidneys) and 0.005 h (stomach), whereas they were 0.189 h (brain), 0.11 h (liver), 0.01 h (spleen), 0.0007 h (adrenals), 0.02 h (kidneys) and 0.004 h (stomach) in females. The effective dose was found to be 0.020 mSv/MBq in males and 0.025 mSv/MBq in females from internally administered 18FDG, and 6.8 mSv in males and 7.9 mSv in females from the CT component. For an administered activity of 370 MBq of 18FDG, the effective dose from PET/CT investigations was estimated to be 14.2 mSv in males and 17.2 mSv in females. The present results did not demonstrate a significant difference in the kinetics of 18FDG distribution between male and female patients. The estimated PET/CT doses were found to be higher than those of many other conventional diagnostic radiology examinations, suggesting that all efforts should be made to clinically justify and carefully weigh the risk-benefit ratio prior to every 18FDG whole body PET/CT scan.
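
    A worked check of the reported totals, using the per-activity coefficients and CT doses quoted above:

\[
E_{\text{male}} = 370\ \text{MBq} \times 0.020\ \text{mSv/MBq} + 6.8\ \text{mSv} = 7.4 + 6.8 = 14.2\ \text{mSv},
\]
\[
E_{\text{female}} = 370\ \text{MBq} \times 0.025\ \text{mSv/MBq} + 7.9\ \text{mSv} = 9.25 + 7.9 \approx 17.2\ \text{mSv}.
\]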

  17. Estimated flood-inundation maps for Cowskin Creek in western Wichita, Kansas

    USGS Publications Warehouse

    Studley, Seth E.

    2003-01-01

    The October 31, 1998, flood on Cowskin Creek in western Wichita, Kansas, caused millions of dollars in damages. Emergency management personnel and flood mitigation teams had difficulty in efficiently identifying areas affected by the flooding, and no warning was given to residents because flood-inundation information was not available. To provide detailed information about future flooding on Cowskin Creek, high-resolution estimated flood-inundation maps were developed using geographic information system technology and advanced hydraulic analysis. Two-foot-interval land-surface elevation data from a 1996 flood insurance study were used to create a three-dimensional topographic representation of the study area for hydraulic analysis. The data computed from the hydraulic analyses were converted into geographic information system format with software from the U.S. Army Corps of Engineers' Hydrologic Engineering Center. The results were overlaid on the three-dimensional topographic representation of the study area to produce maps of estimated flood-inundation areas and estimated depths of water in the inundated areas for 1-foot increments on the basis of stream stage at an index streamflow-gaging station. A Web site (http://ks.water.usgs.gov/Kansas/cowskin.floodwatch) was developed to provide the public with information pertaining to flooding in the study area. The Web site shows graphs of the real-time streamflow data for U.S. Geological Survey gaging stations in the area and monitors the National Weather Service Arkansas-Red Basin River Forecast Center for Cowskin Creek flood-forecast information. When a flood is forecast for the Cowskin Creek Basin, an estimated flood-inundation map is displayed for the stream stage closest to the National Weather Service's forecasted peak stage. Users of the Web site are able to view the estimated flood-inundation maps for selected stages at any time and to access information about this report and about flooding in general. Flood recovery teams also have the ability to view the estimated flood-inundation map pertaining to the most recent flood. The availability of these maps and the ability to monitor the real-time stream stage through the U.S. Geological Survey Web site provide emergency management personnel and residents with information that is critical for evacuation and rescue efforts in the event of a flood as well as for post-flood recovery efforts.

  18. Software for Optimizing Quality Assurance of Other Software

    NASA Technical Reports Server (NTRS)

    Feather, Martin; Cornford, Steven; Menzies, Tim

    2004-01-01

    Software assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. Examples of such activities are the following: code inspections, unit tests, design reviews, performance analyses, construction of traceability matrices, etc. In practice, software development projects have only limited resources (e.g., schedule, budget, and availability of personnel) to cover the entire development effort, of which assurance is but a part. Projects must therefore select judiciously from among the possible assurance activities. At its heart, this can be viewed as an optimization problem; namely, to determine the allocation of limited resources (time, money, and personnel) to minimize risk or, alternatively, to minimize the resources needed to reduce risk to an acceptable level. The end result of the work reported here is a means to optimize quality-assurance processes used in developing software.
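
    A toy illustration of the optimization framing (not the tool described above): choose assurance activities under a cost budget so as to minimize the residual risk left unmitigated; the activities, costs, and risk-reduction values are invented.

```python
# Brute-force selection of assurance activities under a budget (toy example).
from itertools import combinations

# Hypothetical activities: (name, cost in staff-days, risk reduction in arbitrary units)
activities = [
    ("code inspection",      10, 30),
    ("unit tests",           15, 40),
    ("design review",         8, 25),
    ("performance analysis", 12, 20),
    ("traceability matrix",   5, 10),
]
total_risk, budget = 125, 30

best = (total_risk, ())
for r in range(len(activities) + 1):
    for subset in combinations(activities, r):
        cost = sum(a[1] for a in subset)
        if cost <= budget:
            residual = total_risk - sum(a[2] for a in subset)
            best = min(best, (residual, tuple(a[0] for a in subset)))

print(best)   # (lowest achievable residual risk, chosen activities)
```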

  19. FRAMES-2.0 Software System: Frames 2.0 Pest Integration (F2PEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castleton, Karl J.; Meyer, Philip D.

    2009-06-17

    The implementation of the FRAMES 2.0 F2PEST module is described, including requirements, design, and specifications of the software. This module integrates the PEST parameter estimation software within the FRAMES 2.0 environmental modeling framework. A test case is presented.

  20. Understanding software faults and their role in software reliability modeling

    NASA Technical Reports Server (NTRS)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability become better understood in the modeling process, this information begins to have important implications on the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analysis such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to use to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis. 
There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
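
    As a concrete illustration of the approach this abstract describes (not the paper's own code or data), the sketch below builds a few deliberately correlated raw metrics and maps them onto orthogonal principal-component scores; all metric values are synthetic.

```python
# Illustrative sketch: mapping correlated raw software metrics onto
# orthogonal principal-component "domain" scores. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical metrics for 200 modules: LOC, statement count, cyclomatic
# complexity, operand count -- deliberately built to be highly correlated.
loc = rng.normal(500, 150, 200)
metrics = np.column_stack([
    loc,                                   # LOC
    0.8 * loc + rng.normal(0, 20, 200),    # Stmts, nearly collinear with LOC
    0.02 * loc + rng.normal(0, 2, 200),    # cyclomatic complexity
    1.5 * loc + rng.normal(0, 60, 200),    # operand count
])

# Standardize, then extract components; the correlation structure collapses
# into one or two dominant, mutually orthogonal dimensions.
z = StandardScaler().fit_transform(metrics)
pca = PCA()
scores = pca.fit_transform(z)              # orthogonal domain scores

print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print("score correlations:\n", np.round(np.corrcoef(scores, rowvar=False), 3))
```

    The off-diagonal correlations of the component scores are effectively zero, which is what removes the multicollinearity in any subsequent regression.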

  1. Calibration of a COTS Integration Cost Model Using Local Project Data

    NASA Technical Reports Server (NTRS)

    Boland, Dillard; Coon, Richard; Byers, Kathryn; Levitt, David

    1997-01-01

    The software measures and estimation techniques appropriate to a Commercial Off the Shelf (COTS) integration project differ from those commonly used for custom software development. Labor and schedule estimation tools that model COTS integration are available. Like all estimation tools, they must be calibrated with the organization's local project data. This paper describes the calibration of a commercial model using data collected by the Flight Dynamics Division (FDD) of the NASA Goddard Space Flight Center (GSFC). The model calibrated is SLIM Release 4.0 from Quantitative Software Management (QSM). By adopting the SLIM reuse model and by treating configuration parameters as lines of code, we were able to establish a consistent calibration for COTS integration projects. The paper summarizes the metrics, the calibration process and results, and the validation of the calibration.
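
    SLIM implements Putnam's software equation. The sketch below shows that relationship together with the paper's convention of counting configuration parameters as lines of code; the productivity constant, reuse factor, and sizes are placeholders, not the FDD calibration reported here.

```python
# Hedged sketch of the Putnam/SLIM-style software equation; every constant
# below is a placeholder, not the calibrated FDD value.
def effective_size(new_sloc, reused_sloc, reuse_factor, config_params):
    """Effective size: new code, reuse-discounted code, and configuration
    parameters counted as lines of code (the paper's convention)."""
    return new_sloc + reuse_factor * reused_sloc + config_params

def putnam_effort(size, dev_time_years, productivity):
    """Putnam software equation  Size = C * K**(1/3) * td**(4/3),
    solved for life-cycle effort K (person-years)."""
    return (size / (productivity * dev_time_years ** (4.0 / 3.0))) ** 3

size = effective_size(new_sloc=8_000, reused_sloc=40_000,
                      reuse_factor=0.25, config_params=2_500)
print("effective size:", size)
print("effort (person-years):",
      round(putnam_effort(size, dev_time_years=1.0, productivity=10_000), 1))
```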

  2. Reducing Contingency through Sampling at the Luckey FUSRAP Site - 13186

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frothingham, David; Barker, Michelle; Buechi, Steve

    2013-07-01

    Typically, the greatest risk in developing accurate cost estimates for the remediation of hazardous, toxic, and radioactive waste sites is the uncertainty in the estimated volume of contaminated media requiring remediation. Efforts to address this risk in the remediation cost estimate can result in large cost contingencies that are often considered unacceptable when budgeting for site cleanups. Such was the case for the Luckey Formerly Utilized Sites Remedial Action Program (FUSRAP) site near Luckey, Ohio, which had significant uncertainty surrounding the estimated volume of site soils contaminated with radium, uranium, thorium, beryllium, and lead. Funding provided by the American Recovery and Reinvestment Act (ARRA) allowed the U.S. Army Corps of Engineers (USACE) to conduct additional environmental sampling and analysis at the Luckey Site between November 2009 and April 2010, with the objective to further delineate the horizontal and vertical extent of contaminated soils in order to reduce the uncertainty in the soil volume estimate. Investigative work included radiological, geophysical, and topographic field surveys, subsurface borings, and soil sampling. Results from the investigative sampling were used in conjunction with Argonne National Laboratory's Bayesian Approaches for Adaptive Spatial Sampling (BAASS) software to update the contaminated soil volume estimate for the site. This updated volume estimate was then used to update the project cost-to-complete estimate using the USACE Cost and Schedule Risk Analysis process, which develops cost contingencies based on project risks. An investment of $1.1 M of ARRA funds for additional investigative work resulted in a reduction of 135,000 in-situ cubic meters (177,000 in-situ cubic yards) in the estimated base volume. This refinement of the estimated soil volume resulted in a $64.3 M reduction in the estimated project cost-to-complete, through a reduction in the uncertainty in the contaminated soil volume estimate and the associated contingency costs. (authors)

  3. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, C.; Crook, J.

    1998-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, reuse software, and reduced software reverification time related to software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.

  4. Software engineering project management - A state-of-the-art report

    NASA Technical Reports Server (NTRS)

    Thayer, R. H.; Lehman, J. H.

    1977-01-01

    The management of software engineering projects in the aerospace industry was investigated. The survey assessed such features as contract type, specification preparation techniques, software documentation required by customers, planning and cost-estimating, quality control, the use of advanced program practices, software tools and test procedures, the education levels of project managers, programmers and analysts, work assignment, automatic software monitoring capabilities, design and coding reviews, production times, success rates, and organizational structure of the projects.

  5. Risk analysis and management

    NASA Technical Reports Server (NTRS)

    Smith, H. E.

    1990-01-01

    Present software development accomplishments are indicative of the emerging interest in and increasing efforts to provide risk assessment backbone tools in the manned spacecraft engineering community. There are indications that similar efforts are underway in the chemical processes industry and are probably being planned for other high-risk ground-based environments. It appears that complex flight systems intended for extended manned planetary exploration will drive this technology.

  6. 67 FR 24495 - United States v. Microsoft Corporation; Public Comments; Notice (MTC-00003461 - MTC-00007629)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2002-05-03

    ... (web-serving software), Linux, Perl, and those who are building a compatible & free version of MS's..., Argument from Design Argument from Design-Web & Multimedia [email protected] http://www.ardes.com MTC-00003464... organization could be a good target for this effort. Their web address is http://www.gnu.org/. This effort...

  7. Manager's handbook for software development, revision 1

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Methods and aids for the management of software development projects are presented. The recommendations are based on analyses and experiences of the Software Engineering Laboratory (SEL) with flight dynamics software development. The management aspects of the following subjects are described: organizing the project, producing a development plan, estimating costs, scheduling, staffing, preparing deliverable documents, using management tools, monitoring the project, conducting reviews, auditing, testing, and certifying.

  8. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.
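
    The Moranda geometric de-eutrophication model referenced above treats the i-th time between failures as exponential with rate D * k**(i-1). The sketch below fits D and k by maximum likelihood on synthetic inter-failure times; it is only an illustration of the underlying model and does not reproduce the paper's pivotal-quantity confidence bounds.

```python
# Hedged sketch of the Moranda geometric de-eutrophication model:
# the i-th inter-failure time is exponential with rate D * k**(i-1).
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, times):
    log_d, logit_k = params
    d = np.exp(log_d)
    k = 1.0 / (1.0 + np.exp(-logit_k))      # keep 0 < k < 1
    i = np.arange(len(times))
    rates = d * k ** i
    return -np.sum(np.log(rates) - rates * times)

# Hypothetical inter-failure times (hours); gaps grow as bugs are removed.
times = np.array([2.0, 3.5, 3.0, 6.0, 8.0, 7.5, 14.0, 20.0, 35.0, 60.0])
fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(times,),
               method="Nelder-Mead")
d_hat = np.exp(fit.x[0])
k_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"initial failure rate D ~ {d_hat:.3f}/h, improvement factor k ~ {k_hat:.3f}")
```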

  9. Experimental research control software system

    NASA Astrophysics Data System (ADS)

    Cohn, I. A.; Kovalenko, A. G.; Vystavkin, A. N.

    2014-05-01

    A software system, intended for automation of a small scale research, has been developed. The software allows one to control equipment, acquire and process data by means of simple scripts. The main purpose of that development is to increase experiment automation easiness, thus significantly reducing experimental setup automation efforts. In particular, minimal programming skills are required and supervisors have no reviewing troubles. Interactions between scripts and equipment are managed automatically, thus allowing to run multiple scripts simultaneously. Unlike well-known data acquisition commercial software systems, the control is performed by an imperative scripting language. This approach eases complex control and data acquisition algorithms implementation. A modular interface library performs interaction with external interfaces. While most widely used interfaces are already implemented, a simple framework is developed for fast implementations of new software and hardware interfaces. While the software is in continuous development with new features being implemented, it is already used in our laboratory for automation of a helium-3 cryostat control and data acquisition. The software is open source and distributed under Gnu Public License.
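
    The abstract does not show the scripting language itself, so the following is only a hypothetical Python-flavored sketch of the imperative pattern it describes; the module name lab_io and the instrument handles ("lakeshore", "dvm") are invented for illustration and are not the system's real API.

```python
# Hypothetical sketch of an imperative measurement script of the kind the
# abstract describes; lab_io and all instrument names are invented.
import time

import lab_io  # assumed modular interface library wrapping GPIB/serial/etc.

cryostat = lab_io.open("lakeshore")   # hypothetical temperature controller
voltmeter = lab_io.open("dvm")        # hypothetical digital voltmeter

with open("iv_vs_temperature.csv", "w") as log:
    for setpoint_mK in range(300, 901, 100):
        cryostat.set_temperature(setpoint_mK / 1000.0)
        time.sleep(120)                       # wait for thermal settling
        reading = voltmeter.read()
        log.write(f"{setpoint_mK},{reading}\n")
```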

  10. Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras.

    PubMed

    Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore

    2014-01-01

    To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department and whether the use of different cameras for the acquisition of raw data influences the results. The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1 and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results.
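
    Since the abstract recommends reporting limits of agreement between packages, the sketch below shows a standard Bland-Altman style calculation; the LVEF values are synthetic, not the study's data.

```python
# Hedged sketch: Bland-Altman bias and 95% limits of agreement between two
# LVEF estimates (e.g., Siemens vs. Hermes packages). Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
hermes = rng.normal(55, 10, 200)                     # hypothetical LVEF (%)
siemens = hermes + 4.3 + rng.normal(0, 2.0, 200)     # ~4.3% higher on average

diff = siemens - hermes
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f}%, 95% limits of agreement = "
      f"[{bias - loa:.1f}%, {bias + loa:.1f}%]")
```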

  11. Estimation of Organ Absorbed Doses in Patients from 99mTc-diphosphonate Using the Data of MIRDose Software

    PubMed Central

    Shahbazi-Gahrouei, Daryoush; Cheki, Mohsen; Moslehi, Masoud

    2012-01-01

    The purpose of this study was to compare estimates of radiation absorbed doses to patients following bone scans with technetium-99m-labeled methylene diphosphonate (99mTc-MDP) with the estimates given by the MIRDose software. In this study, each patient was injected with 25 mCi of 99mTc-MDP. Whole-body images of thirty patients were acquired by gamma camera at 10, 60, 90, and 180 minutes after 99mTc-MDP injection. To determine the amount of activity in each organ, the conjugate view method was applied to the images. The MIRD equation was then used to estimate absorbed doses in different organs of the patients. Finally, the absorbed dose values obtained in this study were compared with the data of the MIRDose software. The absorbed doses per unit of injected activity (mGy/MBq × 10^-4) for the liver, kidneys, bladder wall, and spleen were 3.86 ± 1.1, 38.73 ± 4.7, 4.16 ± 1.8, and 3.91 ± 1.3, respectively. The results of this study may be useful for estimating the amount of activity that can be administered to a patient and also showed that the method used in this study for absorbed dose calculation is in good agreement with the data of the MIRDose software and can be used by clinicians. PMID:23724374
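
    For orientation only, the sketch below shows the shape of the conjugate-view activity estimate and the MIRD mean-dose equation the abstract names; every numeric constant (calibration factor, attenuation coefficient, S value, count rates) is a placeholder, not a value from the paper.

```python
# Hedged sketch of conjugate-view quantification and the MIRD equation
# D = A_tilde * S. All constants below are placeholders.
import numpy as np

def conjugate_view_activity(counts_ant, counts_post, patient_thickness_cm,
                            mu_cm=0.12, calib_cps_per_MBq=80.0, f_source=1.0):
    """Organ activity (MBq) from anterior/posterior count rates (cps)."""
    transmission = np.exp(-mu_cm * patient_thickness_cm)
    return (np.sqrt(counts_ant * counts_post / transmission)
            * f_source / calib_cps_per_MBq)

# Activity at the four imaging time points (minutes after injection).
t_min = np.array([10, 60, 90, 180])
a_mbq = np.array([conjugate_view_activity(ca, cp, 22.0)
                  for ca, cp in [(950, 700), (800, 600), (700, 520), (500, 380)]])

# Cumulated activity (MBq*h) by trapezoidal integration over the imaging
# window, then mean organ dose via the MIRD equation.
a_tilde = np.trapz(a_mbq, t_min / 60.0)
S_kidney = 1.0e-3        # placeholder S value, mGy per MBq*h
print(f"cumulated activity ~ {a_tilde:.1f} MBq*h, "
      f"kidney dose ~ {a_tilde * S_kidney:.2f} mGy")
```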

  12. Standard Area Diagrams for Aiding Severity Estimation: Scientometrics, Pathosystems, and Methodological Trends in the Last 25 Years.

    PubMed

    Del Ponte, Emerson M; Pethybridge, Sarah J; Bock, Clive H; Michereff, Sami J; Machado, Franklin J; Spolti, Piérri

    2017-10-01

    Standard area diagrams (SAD) have long been used as a tool to aid the estimation of plant disease severity, an essential variable in phytopathometry. Formal validation of SAD was not considered prior to the early 1990s, when considerable effort began to be invested in developing SAD and assessing their value for improving accuracy of estimates of disease severity in many pathosystems. Peer-reviewed literature post-1990 was identified, selected, and cataloged in bibliographic software for further scrutiny and extraction of scientometric, pathosystem-related, and methodological-related data. In total, 105 studies (127 SAD) were found, authored by 327 researchers from 10 countries, mainly from Brazil. The six most prolific authors published at least seven studies. The scientific impact of a SAD article, based on annual citations after publication year, was affected by disease significance, the journal's impact factor, and methodological innovation. The reviewed SAD encompassed 48 crops and 103 unique diseases across a range of plant organs. Severity was quantified largely by image analysis software such as QUANT, APS-Assess, or a LI-COR leaf area meter. The most typical SAD comprised five to eight black-and-white drawings of leaf diagrams, with severity increasing nonlinearly. However, there was a trend toward using true-color photographs or stylized representations in a range of color combinations and more linear (equally spaced) increments of severity. A two-step SAD validation approach was used in 78 of 105 studies, for which linear regression was the preferred method, but a trend toward using Lin's concordance correlation analysis and hypothesis tests to detect the effect of SAD on accuracy was apparent. Reliability measures, when obtained, mainly considered variation among rather than within raters. The implications of the findings and knowledge gaps are discussed. A list of best practices for designing and implementing SAD and a website called SADBank for hosting SAD research data are proposed.
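
    The review notes a trend toward Lin's concordance correlation coefficient for SAD validation; the sketch below computes that statistic on synthetic "actual" and SAD-aided severity estimates, purely as an illustration of the method named.

```python
# Hedged sketch of Lin's concordance correlation coefficient (CCC);
# the severity data below are synthetic, not from any reviewed study.
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two rating vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

actual = np.array([1, 3, 5, 10, 15, 25, 40, 60, 75, 90])        # % severity
estimated = np.array([2, 4, 7, 12, 14, 28, 35, 55, 80, 85])     # rater with SAD
print("CCC =", round(lin_ccc(actual, estimated), 3))
```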

  13. Real Time System for Practical Acoustic Monitoring of Global Ocean Temperature. Volume 3

    DTIC Science & Technology

    1994-06-30

    The signal processing software delivered to the SSAR performs Doppler correction, circulating sums, matched filtering and pulse compression, estimation of multipath arrival angle, and peak-picking. The work is based on an oceanographic application relating geometry, sound speed, and focusing-region scales to the acoustic wavelengths.

  14. Which factors affect software projects maintenance cost more?

    PubMed

    Dehaghani, Sayed Mehdi Hejazi; Hajrahimi, Nafiseh

    2013-03-01

    The software industry has made significant progress in recent years. The entire life of software includes two phases: production and maintenance. Software maintenance cost is growing steadily, and estimates show that about 90% of software life-cycle cost is related to the maintenance phase. Extracting and considering the factors affecting software maintenance cost helps to estimate the cost and to reduce it by controlling those factors. In this study, the factors affecting software maintenance cost were determined and then ranked based on their priority, after which effective ways to reduce maintenance costs were presented. This paper is a research study. Fifteen software systems related to health care center information systems at Isfahan University of Medical Sciences and its hospitals were studied in the years 2010 to 2011. Among medical software maintenance team members, 40 were selected as the sample. After interviews with experts in this field, the factors affecting maintenance cost were determined. To prioritize the factors using the Analytic Hierarchy Process (AHP), the measurement criteria (the factors found) were first weighted by members of the maintenance team and then prioritized with the help of EC software. Based on the results of this study, 32 factors were obtained, which were classified into six groups. "Project" was ranked as the factor with the greatest effect on maintenance cost and the highest priority. By taking into account major elements such as careful feasibility studies for IT projects, full documentation, and involvement of the designers in the maintenance phase, good results can be achieved in reducing maintenance costs and increasing the longevity of the software.
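
    As an illustration of the AHP step described above (not the study's actual judgments), the sketch below derives priority weights for cost-factor groups from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; all group names other than "Project" and all matrix entries are hypothetical.

```python
# Hedged sketch of the AHP prioritization step: priority weights from a
# Saaty-scale pairwise comparison matrix. Matrix entries are hypothetical.
import numpy as np

factors = ["Project", "Personnel", "Product", "Process", "Environment", "Tools"]

# A[i, j] = relative importance of factor i over factor j (Saaty 1-9 scale).
A = np.array([
    [1,   3,   4,   5,   6,   7],
    [1/3, 1,   2,   3,   4,   5],
    [1/4, 1/2, 1,   2,   3,   4],
    [1/5, 1/3, 1/2, 1,   2,   3],
    [1/6, 1/4, 1/3, 1/2, 1,   2],
    [1/7, 1/5, 1/4, 1/3, 1/2, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, idx].real)
weights /= weights.sum()

# Consistency ratio CI / RI, with random index RI = 1.24 for n = 6 (Saaty).
n = len(A)
ci = (eigvals[idx].real - n) / (n - 1)
print("consistency ratio:", round(ci / 1.24, 3))
for name, w in sorted(zip(factors, weights), key=lambda p: -p[1]):
    print(f"{name:12s} {w:.3f}")
```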

  15. Automated Operations Development for Advanced Exploration Systems

    NASA Technical Reports Server (NTRS)

    Haddock, Angie; Stetson, Howard K.

    2012-01-01

    Automated space operations command and control software development and its implementation must be an integral part of the vehicle design effort. The software design must encompass autonomous fault detection, isolation, and recovery capabilities and also provide single-button intelligent functions for the crew. Development, operations, and safety approval experience with the Timeliner system on-board the International Space Station (ISS), which provided autonomous monitoring with response and single-command functionality of payload systems, can be built upon for future automated operations, as the ISS Payload effort was the first and only autonomous command and control system to be in continuous execution (6 years), 24 hours a day, 7 days a week, within a crewed spacecraft environment. Utilizing proven capabilities from the ISS Higher Active Logic (HAL) System [1], along with the execution component design from within the HAL 9000 Space Operating System [2], this design paper will detail the initial HAL System software architecture and interfaces as applied to NASA's Habitat Demonstration Unit (HDU) in support of the Advanced Exploration Systems, Autonomous Mission Operations project. The development and implementation of integrated simulators within this development effort will also be detailed and is the first step in verifying the effectiveness of the HAL 9000 Integrated Test-Bed Component [2] designs. This design paper will conclude with a summary of the current development status and future development goals as it pertains to automated command and control for the HDU.

  16. Automated Operations Development for Advanced Exploration Systems

    NASA Technical Reports Server (NTRS)

    Haddock, Angie T.; Stetson, Howard

    2012-01-01

    Automated space operations command and control software development and its implementation must be an integral part of the vehicle design effort. The software design must encompass autonomous fault detection, isolation, and recovery capabilities and also provide "single button" intelligent functions for the crew. Development, operations, and safety approval experience with the Timeliner system onboard the International Space Station (ISS), which provided autonomous monitoring with response and single-command functionality of payload systems, can be built upon for future automated operations, as the ISS Payload effort was the first and only autonomous command and control system to be in continuous execution (6 years), 24 hours a day, 7 days a week, within a crewed spacecraft environment. Utilizing proven capabilities from the ISS Higher Active Logic (HAL) System, along with the execution component design from within the HAL 9000 Space Operating System, this design paper will detail the initial HAL System software architecture and interfaces as applied to NASA's Habitat Demonstration Unit (HDU) in support of the Advanced Exploration Systems, Autonomous Mission Operations project. The development and implementation of integrated simulators within this development effort will also be detailed and is the first step in verifying the effectiveness of the HAL 9000 Integrated Test-Bed Component [2] designs. This design paper will conclude with a summary of the current development status and future development goals as it pertains to automated command and control for the HDU.

  17. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    NASA Astrophysics Data System (ADS)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe new software (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data for an astronomical observatory. This software will help produce atmospheric forecasts (cloud, humidity, rain) from Meteosat data for robotic telescopes. MIPS uses a Python library for EUMETSAT data, aims to be completely open source, and is licensed under the GNU General Public License (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the Qt framework.
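
    MIPS itself is not shown here, so the following is only a heavily hedged quick-look sketch of the kind of h5py + numpy + PIL pipeline the abstract mentions; the file name and dataset path are invented and are not MIPS's real data layout.

```python
# Hypothetical quick-look sketch in the spirit of the toolchain named above;
# file name and dataset path are invented.
import h5py
import numpy as np
from PIL import Image

with h5py.File("meteosat_segment.h5", "r") as f:          # hypothetical file
    counts = f["image/IR_108/counts"][:]                   # hypothetical dataset

# Scale raw counts to 8-bit grey levels for a quick-look cloud image.
lo, hi = np.percentile(counts, [2, 98])
grey = np.clip((counts - lo) / (hi - lo), 0, 1) * 255
Image.fromarray(grey.astype(np.uint8)).save("quicklook_ir108.png")
```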

  18. Near-infrared face recognition utilizing open CV software

    NASA Astrophysics Data System (ADS)

    Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.

    2014-06-01

    Commercially available hardware, freely available algorithms, and software developed by the authors are combined to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in the near-infrared (NIR) spectrum, a NIR-capable camera, and a software algorithm capable of performing image manipulation, facial detection, and recognition. Focusing the effort on the near-infrared spectrum allows the low-budget system to operate covertly while still allowing for accurate face recognition. In doing so, a valuable capability has been developed that presents potential benefits for future civilian and military security and surveillance operations.
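
    The abstract does not name the specific algorithms used, so the sketch below is only one plausible OpenCV detect-then-recognize pipeline: the stock Haar cascade detector and the LBPH recognizer from opencv-contrib, both of which operate on greyscale frames and therefore apply to NIR imagery as well. The image files and gallery are hypothetical.

```python
# Hedged sketch of an OpenCV detect-then-recognize pipeline; the authors'
# actual algorithms are unspecified. Requires opencv-contrib-python.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

def detect_faces(gray_frame):
    """Return face bounding boxes in a greyscale (e.g. NIR) frame."""
    return detector.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)

# Enrolment: map integer subject labels to cropped greyscale face images
# collected earlier (hypothetical files, prepared elsewhere).
gallery = {0: cv2.imread("subject0_nir.png", cv2.IMREAD_GRAYSCALE)}
recognizer.train(list(gallery.values()), np.array(list(gallery.keys())))

frame = cv2.imread("nir_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
for (x, y, w, h) in detect_faces(frame):
    crop = cv2.resize(frame[y:y + h, x:x + w], (100, 100))
    label, distance = recognizer.predict(crop)
    print(f"subject {label}, LBPH distance {distance:.1f}, box ({x},{y},{w},{h})")
```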

  19. A Unique Software System For Simulation-to-Flight Research

    NASA Technical Reports Server (NTRS)

    Chung, Victoria I.; Hutchinson, Brian K.

    2001-01-01

    "Simulation-to-Flight" is a research development concept to reduce costs and increase testing efficiency of future major aeronautical research efforts at NASA. The simulation-to-flight concept is achieved by using common software and hardware, procedures, and processes for both piloted-simulation and flight testing. This concept was applied to the design and development of two full-size transport simulators, a research system installed on a NASA B-757 airplane, and two supporting laboratories. This paper describes the software system that supports the simulation-to-flight facilities. Examples of various simulation-to-flight experimental applications were also provided.

  20. Formal specification and verification of Ada software

    NASA Technical Reports Server (NTRS)

    Hird, Geoffrey R.

    1991-01-01

    The use of formal methods in software development achieves levels of quality assurance unobtainable by other means. The Larch approach to specification is described, and the specification of avionics software designed to implement the logic of a flight control system is given as an example. Penelope, an Ada verification environment, is described. The Penelope user inputs mathematical definitions, Larch-style specifications, and Ada code and performs machine-assisted proofs that the code obeys its specifications. As an example, the verification of a binary search function is considered. Emphasis is given to techniques assisting the reuse of a verification effort on modified code.
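
    As an informal illustration only: the kind of precondition/postcondition that such a binary-search specification states can be expressed as runtime assertions, shown below in Python. This is a far weaker check than the machine-assisted Larch/Ada proofs Penelope performs; it merely makes the specification concrete.

```python
# Illustration of the binary-search contract as runtime assertions (not the
# Larch/Ada specification or the Penelope proof itself).
def binary_search(a, key):
    """Return an index i with a[i] == key, or -1 if key is absent.
    Precondition: a is sorted in non-decreasing order."""
    assert all(a[i] <= a[i + 1] for i in range(len(a) - 1)), "precondition: sorted"
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] < key:
            lo = mid + 1
        elif a[mid] > key:
            hi = mid - 1
        else:
            assert a[mid] == key          # postcondition, found case
            return mid
    assert key not in a                   # postcondition, not-found case
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # -> 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -> -1
```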
