Sample records for conventional software development

  1. On the engineering of crucial software

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.; Knight, J. C.; Gregory, S. T.

    1983-01-01

    The various aspects of the conventional software development cycle are examined. This cycle was the basis of the augmented approach contained in the original grant proposal. This cycle was found inadequate for crucial software development, and the justification for this opinion is presented. Several possible enhancements to the conventional software cycle are discussed. Software fault tolerance, a possible enhancement of major importance, is discussed separately. Formal verification using mathematical proof is considered. Automatic programming is a radical alternative to the conventional cycle and is discussed. Recommendations for a comprehensive approach are presented, and various experiments which could be conducted in AIRLAB are described.

  2. Pilot Study of an Open-source Image Analysis Software for Automated Screening of Conventional Cervical Smears.

    PubMed

    Sanyal, Parikshit; Ganguli, Prosenjit; Barui, Sanghita; Deb, Prabal

    2018-01-01

    The Pap-stained cervical smear is a screening tool for cervical cancer. Commercial systems are used for automated screening of liquid-based cervical smears; however, no image analysis software is in use for conventional cervical smears. The aim of this study was to develop and test the diagnostic accuracy of software for the analysis of conventional smears. The software was developed using the Python programming language and open-source libraries, and was standardized with images from the Bethesda Interobserver Reproducibility Project. One hundred and thirty images from smears reported as Negative for Intraepithelial Lesion or Malignancy (NILM), and 45 images in which some abnormality had been reported, were collected from the archives of the hospital, and the software was then tested on these images. The software was able to segregate images based on overall nuclear:cytoplasmic ratio, coefficient of variation (CV) in nuclear size, nuclear membrane irregularity, and clustering. It flagged 68.88% of abnormal images, as well as 19.23% of NILM images. The major difficulties faced were segmentation of overlapping cell clusters and separation of neutrophils. The software shows potential as a screening tool for conventional cervical smears; however, further refinement of the technique is required.
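
    A minimal Python sketch of the kind of rule-based flagging this record describes, using only the morphometric features named above; the thresholds, parameter names, and flagging logic are hypothetical illustrations, not the study's actual rules:

      # Hypothetical screening rule over simple nuclear morphometrics.
      def flag_image(nc_ratio, nuclear_size_cv, membrane_irregularity, cluster_count,
                     nc_ratio_max=0.5, cv_max=0.25, irregularity_max=0.15, cluster_max=3):
          """Return True if any morphometric measure exceeds its (assumed) threshold."""
          return (nc_ratio > nc_ratio_max
                  or nuclear_size_cv > cv_max
                  or membrane_irregularity > irregularity_max
                  or cluster_count > cluster_max)

      # Example: an image with a high nuclear:cytoplasmic ratio is flagged for review.
      print(flag_image(nc_ratio=0.62, nuclear_size_cv=0.18,
                       membrane_irregularity=0.05, cluster_count=1))   # True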

  3. The Stability and Validity of Automated Vocal Analysis in Preverbal Preschoolers With Autism Spectrum Disorder

    PubMed Central

    Woynaroski, Tiffany; Oller, D. Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul

    2017-01-01

    Theory and research suggest that vocal development predicts “useful speech” in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently “in development” and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. PMID:27459107

  4. The cost of software fault tolerance

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1982-01-01

    The proposed use of software fault tolerance techniques as a means of reducing software costs in avionics and as a means of addressing the issue of system unreliability due to faults in software is examined. A model is developed to provide a view of the relationships among cost, redundancy, and reliability which suggests strategies for software development and maintenance which are not conventional.

  5. The Emergence of Open-Source Software in North America

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while…

  6. Turbine Aeration Design Software for Mitigating Adverse Environmental Impacts Resulting From Conventional Hydropower Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulliver, John S.

    2015-03-01

    The goal of this project was a conventional hydropower turbine aeration test-bed for computational routines and software tools for improving environmental mitigation technologies for conventional hydropower systems. In achieving this goal, we have partnered with Alstom, a global leader in energy technology development and United States power generation, with additional funding from the Initiative for Renewable Energy and the Environment (IREE) and the College of Science and Engineering (CSE) at the University of Minnesota (UMN).

  7. The stability and validity of automated vocal analysis in preverbal preschoolers with autism spectrum disorder.

    PubMed

    Woynaroski, Tiffany; Oller, D Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul

    2017-03-01

    Theory and research suggest that vocal development predicts "useful speech" in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently "in development" and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. Autism Res 2017, 10: 508-519. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  8. Reuse-Driven Software Processes Guidebook. Version 02.00.03

    DTIC Science & Technology

    1993-11-01

    a required system without unduly constraining the details of the solution. The Naval Research Laboratory Software Cost Reduction project developed...conventional manner. The emphasis is still on the development of "one-of-a-kind" systems and the phased completion and review of corresponding...Application Engineering to improve the life-cycle productivity of the total software development enterprise. The

  9. Knowledge-based system V and V in the Space Station Freedom program

    NASA Technical Reports Server (NTRS)

    Kelley, Keith; Hamilton, David; Culbert, Chris

    1992-01-01

    Knowledge Based Systems (KBS's) are expected to be heavily used in the Space Station Freedom Program (SSFP). Although SSFP Verification and Validation (V&V) requirements are based on the latest state-of-the-practice in software engineering technology, they may be insufficient for KBS's; it is widely stated that there are differences in both approach and execution between KBS V&V and conventional software V&V. In order to better understand this issue, we have surveyed and/or interviewed developers from sixty expert system projects in order to understand the differences and difficulties in KBS V&V. We have used these survey results to analyze the SSFP V&V requirements for conventional software in order to determine which specific requirements are inappropriate for KBS V&V and why they are inappropriate. Further work will result in a set of recommendations that can be used either as guidelines for applying conventional software V&V requirements to KBS's or as modifications to extend the existing SSFP conventional software V&V requirements to include KBS requirements. The results of this work are significant to many projects, in addition to SSFP, which will involve KBS's.

  10. Computer Software Information for Educators: A New Approach to Portrayal of Student Tryout Data.

    ERIC Educational Resources Information Center

    Della-Piana, Gabriel; Della-Piana, Connie Kubo

    1984-01-01

    Suggests conventional evaluation reports are inappropriate for what needs to be portrayed for software users concerned with acquisition, software adaptation, design and development, and teacher implementation decisions. Two forms for evaluating information are described and illustrated--a narrative portrayal form and a design and development…

  11. A data model of the Climate and Forecast metadata conventions (CF-1.6) with a software implementation (cf-python v2.1)

    NASA Astrophysics Data System (ADS)

    Hassell, David; Gregory, Jonathan; Blower, Jon; Lawrence, Bryan N.; Taylor, Karl E.

    2017-12-01

    The CF (Climate and Forecast) metadata conventions are designed to promote the creation, processing, and sharing of climate and forecasting data using Network Common Data Form (netCDF) files and libraries. The CF conventions provide a description of the physical meaning of data and of their spatial and temporal properties, but they depend on the netCDF file encoding which can currently only be fully understood and interpreted by someone familiar with the rules and relationships specified in the conventions documentation. To aid in development of CF-compliant software and to capture with a minimal set of elements all of the information contained in the CF conventions, we propose a formal data model for CF which is independent of netCDF and describes all possible CF-compliant data. Because such data will often be analysed and visualised using software based on other data models, we compare our CF data model with the ISO 19123 coverage model, the Open Geospatial Consortium CF netCDF standard, and the Unidata Common Data Model. To demonstrate that this CF data model can in fact be implemented, we present cf-python, a Python software library that conforms to the model and can manipulate any CF-compliant dataset.
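
    As a brief illustration of the cf-python library named in the title, a minimal usage sketch; it assumes cf-python is installed and that "tas.nc" is a CF-compliant netCDF file on disk (the file names are hypothetical):

      # Read a CF-compliant netCDF file into cf-python's implementation of the CF data model.
      import cf

      fields = cf.read("tas.nc")        # list-like collection of cf.Field constructs
      field = fields[0]                 # one field: data plus coordinates, cell methods, properties
      print(field)                      # summary of the field's CF metadata
      cf.write(fields, "tas_copy.nc")   # write the constructs back out as CF-netCDF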

  12. An Update on Design Tools for Optimization of CMC 3D Fiber Architectures

    NASA Technical Reports Server (NTRS)

    Lang, J.; DiCarlo, J.

    2012-01-01

    Objective: Describe and update progress on NASA's efforts to develop 3D architectural design tools for CMC in general and for SiC/SiC composites in particular. Describe past and current sequential work efforts aimed at: Understanding key fiber and tow physical characteristics in conventional 2D and 3D woven architectures as revealed by microstructures in the literature. Developing an Excel program for down-selecting and predicting key geometric properties and resulting key fiber-controlled properties for various conventional 3D architectures. Developing a software tool for accurately visualizing all the key geometric details of conventional 3D architectures. Validating tools by visualizing and predicting the internal geometry and key mechanical properties of a NASA SiC/SiC panel with a 3D orthogonal architecture. Applying the predictive and visualization tools toward advanced 3D orthogonal SiC/SiC composites, and combining them into a user-friendly software program.

  13. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  14. Tailoring Software Inspections for Aspect-Oriented Programming

    ERIC Educational Resources Information Center

    Watkins, Charlette Ward

    2009-01-01

    Aspect-Oriented Software Development (AOSD) is a new approach that addresses limitations inherent in conventional programming, especially the principle of separation of concerns by emphasizing the encapsulation and modularization of crosscutting concerns through a new abstraction, the "aspect." Aspect-oriented programming is an emerging AOSD…

  15. Software safety - A user's practical perspective

    NASA Technical Reports Server (NTRS)

    Dunn, William R.; Corliss, Lloyd D.

    1990-01-01

    Software safety assurance philosophy and practices at NASA Ames are discussed. It is shown that, to be safe, software must be error-free. Software developments on two digital flight control systems and two ground facility systems are examined, including the overall system and software organization and function, the software-safety issues, and their resolution. The effectiveness of safety assurance methods is discussed, including conventional life-cycle practices, verification and validation testing, software safety analysis, and formal design methods. It is concluded (1) that a practical software safety technology does not yet exist, (2) that it is unlikely that a set of general-purpose analytical techniques can be developed for proving that software is safe, and (3) that successful software safety-assurance practices will have to take into account the detailed design processes employed and show that the software will execute correctly under all possible conditions.

  16. AVE-SESAME program for the REEDA System

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1981-01-01

    The REEDA system software was modified and improved to process the AVE-SESAME severe storm data. A random access file system for the AVE storm data was designed, tested, and implemented. The AVE/SESAME software was modified to incorporate the random access file input and to interface with new graphics hardware/software now available on the REEDA system. Software was developed to graphically display the AVE/SESAME data in the convention normally used by severe storm researchers. Software was converted to AVE/SESAME software systems and interfaced with existing graphics hardware/software available on the REEDA System. Software documentation was provided for existing AVE/SESAME programs, outlining functional flow charts and interactive questions. All AVE/SESAME data sets in random access format were processed to allow the developed software to access the entire AVE/SESAME data base. The existing software was modified to allow for processing of different AVE/SESAME data set types, including satellite, surface, and radar data.

  17. Developing open-source codes for electromagnetic geophysics using industry support

    NASA Astrophysics Data System (ADS)

    Key, K.

    2017-12-01

    Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.

  18. Open Architecture SDR for Space

    NASA Technical Reports Server (NTRS)

    Smith, Carl; Long, Chris; Liebetreu, John; Reinhart, Richard C.

    2005-01-01

    This paper describes an open-architecture SDR (software defined radio) infrastructure that is suitable for space-based operations (Space-SDR). SDR technologies will endow space and planetary exploration systems with dramatically increased capability, reduced power consumption, and significantly less mass than conventional systems, at costs reduced by vigorous competition, hardware commonality, dense integration, reduced obsolescence, interoperability, and software re-use. Significant progress has been recorded on developments like the Joint Tactical Radio System (JTRS) Software Communication Architecture (SCA), which is oriented toward reconfigurable radios for defense forces operating in multiple theaters of engagement. The JTRS-SCA presents a consistent software interface for waveform development, and facilitates interoperability, waveform portability, software re-use, and technology evolution.

  19. Software engineering with application-specific languages

    NASA Technical Reports Server (NTRS)

    Campbell, David J.; Barker, Linda; Mitchell, Deborah; Pollack, Robert H.

    1993-01-01

    Application-Specific Languages (ASL's) are small, special-purpose languages that are targeted to solve a specific class of problems. Using ASL's on software development projects can provide considerable cost savings, reduce risk, and enhance quality and reliability. ASL's provide a platform for reuse within a project or across many projects and enable less-experienced programmers to tap into the expertise of application-area experts. ASL's have been used on several software development projects for the Space Shuttle Program. On these projects, the use of ASL's resulted in considerable cost savings over conventional development techniques. Two of these projects are described.

  20. Quality measures and assurance for AI (Artificial Intelligence) software

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1988-01-01

    This report is concerned with the application of software quality and evaluation measures to AI software and, more broadly, with the question of quality assurance for AI software. Considered are not only the metrics that attempt to measure some aspect of software quality, but also the methodologies and techniques (such as systematic testing) that attempt to improve some dimension of quality, without necessarily quantifying the extent of the improvement. The report is divided into three parts. Part 1 reviews existing software quality measures, i.e., those that have been developed for, and applied to, conventional software. Part 2 considers the characteristics of AI software, the applicability and potential utility of measures and techniques identified in the first part, and reviews those few methods developed specifically for AI software. Part 3 presents an assessment and recommendations for the further exploration of this important area.

  1. Expert system verification and validation study: ES V/V Workshop

    NASA Technical Reports Server (NTRS)

    French, Scott; Hamilton, David

    1992-01-01

    The primary purpose of this document is to build a foundation for applying principles of verification and validation (V&V) of expert systems. To achieve this, some background on V&V as applied to conventionally implemented software is required. Part one will discuss the background of V&V from the perspective of (1) what V&V of software is and (2) V&V's role in developing software. Part one will also overview some common analysis techniques that are applied when performing V&V of software. All of these materials will be presented based on the assumption that the reader has little or no background in V&V or in developing procedural software. The primary purpose of part two is to explain the major techniques that have been developed for V&V of expert systems.

  2. Extreme Programming: Maestro Style

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme-programming practices. The single most influential of these factors is that continuous interaction between customers and programmers is not feasible.

  3. A knowledge based software engineering environment testbed

    NASA Technical Reports Server (NTRS)

    Gill, C.; Reedy, A.; Baker, L.

    1985-01-01

    The Carnegie Group Incorporated and Boeing Computer Services Company are developing a testbed which will provide a framework for integrating conventional software engineering tools with Artificial Intelligence (AI) tools to promote automation and productivity. The emphasis is on the transfer of AI technology to the software development process. Experiments relate to AI issues such as scaling up, inference, and knowledge representation. In its first year, the project has created a model of software development by representing software activities; developed a module representation formalism to specify the behavior and structure of software objects; integrated the model with the formalism to identify shared representation and inheritance mechanisms; demonstrated object programming by writing procedures and applying them to software objects; used data-directed and goal-directed reasoning to, respectively, infer the cause of bugs and evaluate the appropriateness of a configuration; and demonstrated knowledge-based graphics. Future plans include introduction of knowledge-based systems for rapid prototyping or rescheduling; natural language interfaces; blackboard architecture; and distributed processing.

  4. Integrated Software Development System/Higher Order Software Conceptual Description (ISDS/HOS)

    DTIC Science & Technology

    1976-11-01

    ...associated with the process steps. They also reference other HIPO diagrams as well as non-HIPO documentation such as flowcharts or decision tables of...syntax that is easy to learn and must provide the novice with some prompting to help him avoid classic beginner errors. Desirable editing capabilities

  5. Guidelines for development of structured FORTRAN programs

    NASA Technical Reports Server (NTRS)

    Earnest, B. M.

    1984-01-01

    Computer programming and coding standards were compiled to serve as guidelines for the uniform writing of FORTRAN 77 programs at NASA Langley. Software development philosophy, documentation, general coding conventions, and specific FORTRAN coding constraints are discussed.

  6. Artificial intelligence and expert systems in flight software testing

    NASA Technical Reports Server (NTRS)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  7. Effects of Using Requirements Catalogs on Effectiveness and Productivity of Requirements Specification in a Software Project Management Course

    ERIC Educational Resources Information Center

    Fernández-Alemán, José Luis; Carrillo-de-Gea, Juan Manuel; Meca, Joaquín Vidal; Ros, Joaquín Nicolás; Toval, Ambrosio; Idri, Ali

    2016-01-01

    This paper presents the results of two educational experiments carried out to determine whether the process of specifying requirements (catalog-based reuse as opposed to conventional specification) has an impact on effectiveness and productivity in co-located and distributed software development environments. The participants in the experiments…

  8. Time-lapse systems for embryo incubation and assessment in assisted reproduction.

    PubMed

    Armstrong, Sarah; Bhide, Priya; Jordan, Vanessa; Pacey, Allan; Farquhar, Cindy

    2018-05-25

    Embryo incubation and assessment is a vital step in assisted reproductive technology (ART). Traditionally, embryo assessment has been achieved by removing embryos from a conventional incubator daily for quality assessment by an embryologist, under a light microscope. Over recent years time-lapse systems have been developed which can take digital images of embryos at frequent time intervals. This allows embryologists, with or without the assistance of embryo selection software, to assess the quality of the embryos without physically removing them from the incubator. The potential advantages of a time-lapse system (TLS) include the ability to maintain a stable culture environment, therefore limiting the exposure of embryos to changes in gas composition, temperature and movement. A TLS has the potential advantage of improving embryo selection for ART treatment by utilising additional information gained through continuously monitoring embryo development. Use of a TLS often adds significant extra cost onto an in vitro fertilisation (IVF) cycle. OBJECTIVES: To determine the effect of a TLS compared to conventional embryo incubation and assessment on clinical outcomes in couples undergoing ART. SEARCH METHODS: We used standard methodology recommended by Cochrane. We searched the Cochrane Gynaecology and Fertility (CGF) Group trials register, CENTRAL, MEDLINE, Embase, CINAHL and two trials registers on 2 August 2017. SELECTION CRITERIA: We included randomised controlled trials (RCTs) making the following comparisons: a TLS, with or without embryo selection software, versus conventional incubation with morphological assessment; and TLS with embryo selection software versus TLS without embryo selection software, among couples undergoing ART. DATA COLLECTION AND ANALYSIS: We used standard methodological procedures recommended by Cochrane. The primary review outcomes were live birth, miscarriage and stillbirth. Secondary outcomes were clinical pregnancy and cumulative clinical pregnancy. We reported the quality of the evidence for important outcomes using GRADE methodology. We made the following comparisons: (1) TLS with conventional morphological assessment of still TLS images versus conventional incubation and assessment; (2) TLS utilising embryo selection software versus TLS with conventional morphological assessment of still TLS images; and (3) TLS utilising embryo selection software versus conventional incubation and assessment. MAIN RESULTS: We included eight RCTs (N = 2303 women). The quality of the evidence ranged from very low to moderate. The main limitations were imprecision and risk of bias associated with lack of blinding of participants and researchers, and indirectness secondary to significant heterogeneity between interventions in some studies. There were no data on cumulative clinical pregnancy. TLS with conventional morphological assessment of still TLS images versus conventional incubation and assessment: There is no evidence of a difference between the interventions in terms of live birth rates (odds ratio (OR) 0.73, 95% CI 0.47 to 1.13, 2 RCTs, N = 440, I² = 11%, moderate-quality evidence) and there may also be no evidence of a difference in miscarriage rates (OR 2.25, 95% CI 0.84 to 6.02, 2 RCTs, N = 440, I² = 44%, low-quality evidence). The evidence suggests that if the live birth rate associated with conventional incubation and assessment is 33%, the rate with use of TLS with conventional morphological assessment of still TLS images is between 19% and 36%; and that if the miscarriage rate with conventional incubation is 3%, the rate associated with conventional morphological assessment of still TLS images would be between 3% and 18%. There is no evidence of a difference between the interventions in the stillbirth rate (OR 1.00, 95% CI 0.13 to 7.49, 1 RCT, N = 76, low-quality evidence). There is no evidence of a difference between the interventions in clinical pregnancy rates (OR 0.88, 95% CI 0.58 to 1.33, 3 RCTs, N = 489, I² = 0%, moderate-quality evidence). TLS utilising embryo selection software versus TLS with conventional morphological assessment of still TLS images: No data were available on live birth or stillbirth. We are uncertain whether TLS utilising embryo selection software influences miscarriage rates (OR 1.39, 95% CI 0.64 to 3.01, 2 RCTs, N = 463, I² = 0%, very low-quality evidence) and there may be no difference in clinical pregnancy rates (OR 0.97, 95% CI 0.67 to 1.42, 2 RCTs, N = 463, I² = 0%, low-quality evidence). The evidence suggests that if the miscarriage rate associated with assessment of still TLS images is 5%, the rate with embryo selection software would be between 3% and 14%. TLS utilising embryo selection software versus conventional incubation and assessment: There is no evidence of a difference between TLS utilising embryo selection software and conventional incubation in improving live birth rates (OR 1.21, 95% CI 0.96 to 1.54, 2 RCTs, N = 1017, I² = 0%, very low-quality evidence). We are uncertain whether TLS influences miscarriage rates (OR 0.73, 95% CI 0.49 to 1.08, 3 RCTs, N = 1351, I² = 0%, very low-quality evidence). The evidence suggests that if the live birth rate associated with no TLS is 38%, the rate with use of conventional incubation would be between 36% and 58%, and that if the miscarriage rate with conventional incubation is 9%, the rate associated with TLS would be between 4% and 10%. No data on stillbirths were available. It was uncertain whether the intervention influenced clinical pregnancy rates (OR 1.17, 95% CI 0.94 to 1.45, 3 RCTs, N = 1351, I² = 42%, very low-quality evidence). AUTHORS' CONCLUSIONS: There is insufficient evidence of differences in live birth, miscarriage, stillbirth or clinical pregnancy to choose between TLS, with or without embryo selection software, and conventional incubation. As the studies were at high risk of bias for randomisation and allocation concealment, the results should be interpreted with extreme caution.

  9. SIDS-to-ADF File Mapping Manual

    NASA Technical Reports Server (NTRS)

    McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)

    2002-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-toADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database. Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of universal software to read and write the data.

  10. Meta-analysis of Odds Ratios: Current Good Practices

    PubMed Central

    Chang, Bei-Hung; Hoaglin, David C.

    2016-01-01

    Background Many systematic reviews of randomized clinical trials lead to meta-analyses of odds ratios. The customary methods of estimating an overall odds ratio involve weighted averages of the individual trials’ estimates of the logarithm of the odds ratio. That approach, however, has several shortcomings, arising from assumptions and approximations, that render the results unreliable. Although the problems have been documented in the literature for many years, the conventional methods persist in software and applications. A well-developed alternative approach avoids the approximations by working directly with the numbers of subjects and events in the arms of the individual trials. Objective We aim to raise awareness of methods that avoid the conventional approximations, can be applied with widely available software, and produce more-reliable results. Methods We summarize the fixed-effect and random-effects approaches to meta-analysis; describe conventional, approximate methods and alternative methods; apply the methods in a meta-analysis of 19 randomized trials of endoscopic sclerotherapy in patients with cirrhosis and esophagogastric varices; and compare the results. We demonstrate the use of SAS, Stata, and R software for the analysis. Results In the example, point estimates and confidence intervals for the overall log-odds-ratio differ between the conventional and alternative methods, in ways that can affect inferences. Programming is straightforward in the three software packages; an appendix gives the details. Conclusions The modest additional programming required should not be an obstacle to adoption of the alternative methods. Because their results are unreliable, use of the conventional methods for meta-analysis of odds ratios should be discontinued. PMID:28169977
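
    The "conventional, approximate methods" discussed above pool inverse-variance-weighted log odds ratios across trials. A small Python sketch of that fixed-effect calculation on made-up 2x2 tables (the sclerotherapy trial data are not reproduced here):

      # Conventional fixed-effect pooling of log odds ratios (Woolf / inverse-variance),
      # with a 0.5 continuity correction for zero cells. Trial counts below are hypothetical.
      import math

      def log_or_and_var(a, b, c, d):
          """a/b = events/non-events in the treatment arm; c/d = control arm."""
          if 0 in (a, b, c, d):                      # continuity correction
              a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
          log_or = math.log((a * d) / (b * c))
          var = 1 / a + 1 / b + 1 / c + 1 / d
          return log_or, var

      trials = [(12, 88, 20, 80), (5, 45, 9, 41), (30, 170, 42, 158)]
      stats = [log_or_and_var(*t) for t in trials]
      weights = [1 / v for _, v in stats]
      pooled = sum(w * lo for (lo, _), w in zip(stats, weights)) / sum(weights)
      se = math.sqrt(1 / sum(weights))
      print(f"pooled OR = {math.exp(pooled):.2f}, "
            f"95% CI = ({math.exp(pooled - 1.96 * se):.2f}, {math.exp(pooled + 1.96 * se):.2f})")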

  11. pyam: Python Implementation of YaM

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    pyam is a software development framework with tools for facilitating the rapid development of software in a concurrent software development environment. pyam provides solutions for development challenges associated with software reuse, managing multiple software configurations, developing software product lines, and multiple platform development and build management. pyam uses release-early, release-often development cycles to allow developers to integrate their changes incrementally into the system on a continual basis. It facilitates the creation and merging of branches to support the isolated development of immature software to avoid impacting the stability of the development effort. It uses modules and packages to organize and share software across multiple software products, and uses the concepts of link and work modules to reduce sandbox setup times even when the code-base is large. One side benefit is the enforcement of a strong module-level encapsulation of a module's functionality and interface. This increases design transparency, system stability, and software reuse. pyam is written in Python and is organized as a set of utilities on top of the open-source SVN version control package. All development software is organized into a collection of modules. pyam packages are defined as sub-collections of the available modules. Developers can set up private sandboxes for module/package development. All module/package development takes place on private SVN branches. High-level pyam commands support the setup, update, and release of modules and packages. Released and pre-built versions of modules are available to developers. Developers can tailor the source/link module mix for their sandboxes so that new sandboxes (even large ones) can be built up easily and quickly by pointing to pre-existing module releases. All inter-module interfaces are publicly exported via links. A minimal, but uniform, convention is used for building modules.

  12. Human Benchmarking of Expert Systems. Literature Review

    DTIC Science & Technology

    1990-01-01

    effectiveness of the development procedures used in order to predict whether the application of similar approaches will likely have effective and...they used in their learning and problem solving. We will describe these approaches later. Reasoning. Reasoning usually includes inference. Because to ... in the software engineering process. For example, existing approaches to software evaluation in the military are based on a model of conventional

  13. The use of Virtual Analogy Simulation (VAS) in physics learning

    NASA Astrophysics Data System (ADS)

    Faizin, M. Noor; Samsudin, A.

    2018-05-01

    The purpose of this research is to explore the use of VAS software in dynamic electricity learning for junior high school students, so as to obtain an overview of the software's consistency in helping students build a scientific conception. This research was administered via a Research and Development (R&D) approach with the design of embedded experimental models. The respondents involved in this research were 60 ninth-grade students at a junior high school in Kudus, Central Java. The improvement in students' conceptual understanding was examined based on normalized gain analysis of pretest and posttest scores. The result of this research shows that there was a difference between conventional learning (PowerPoint software) and learning with VAS software. VAS is more effective in assisting students in understanding the dynamic electricity concept, shown by an N-gain of 0.36, or 36%, included in the medium category, whereas conventional learning yielded an N-gain of 0.28, or 28%.
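
    The normalized gain (N-gain) statistic reported above is conventionally computed as (posttest - pretest) / (maximum score - pretest). A small Python illustration; the pretest and posttest scores below are hypothetical values chosen only to reproduce the reported gains:

      # Normalized gain: fraction of the possible improvement actually achieved.
      def n_gain(pre, post, max_score=100):
          return (post - pre) / (max_score - pre)

      print(round(n_gain(40.0, 61.6), 2))   # 0.36: reported gain with VAS
      print(round(n_gain(40.0, 56.8), 2))   # 0.28: reported gain with conventional learning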

  14. NetCDF-CF: Supporting Earth System Science with Data Access, Analysis, and Visualization

    NASA Astrophysics Data System (ADS)

    Davis, E.; Zender, C. S.; Arctur, D. K.; O'Brien, K.; Jelenak, A.; Santek, D.; Dixon, M. J.; Whiteaker, T. L.; Yang, K.

    2017-12-01

    NetCDF-CF is a community-developed convention for storing and describing earth system science data in the netCDF binary data format. It is an OGC-recognized standard, and numerous existing FOSS (Free and Open Source Software) and commercial software tools can explore, analyze, and visualize data that are stored and described as netCDF-CF data. To better support a larger segment of the earth system science community, a number of efforts are underway to extend the netCDF-CF convention with the goal of increasing the types of data that can be represented as netCDF-CF data. This presentation will provide an overview and update of work to extend the existing netCDF-CF convention. It will detail the types of earth system science data currently supported by netCDF-CF and the types of data targeted for support by current netCDF-CF convention development efforts. It will also describe some of the tools that support the use of netCDF-CF compliant datasets, the types of data they support, and efforts to extend them to handle the new data types that netCDF-CF will support.
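
    For a sense of how such tools consume CF metadata, a minimal sketch using the xarray library to open a netCDF-CF file and read its convention and standard-name attributes; the file name and variable name below are hypothetical:

      # Open a (hypothetical) CF-compliant netCDF file and inspect its CF metadata.
      import xarray as xr

      ds = xr.open_dataset("ocean_temp.nc")                  # CF time/units decoding is on by default
      print(ds.attrs.get("Conventions"))                     # e.g. "CF-1.6" if the file declares it
      print(ds["temperature"].attrs.get("standard_name"))    # CF standard name of the variable, if present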

  15. SwarmSight: Real-time Tracking of Insect Antenna Movements and Proboscis Extension Reflex Using a Common Preparation and Conventional Hardware

    PubMed Central

    Birgiolas, Justas; Jernigan, Christopher M.; Gerkin, Richard C.; Smith, Brian H.; Crook, Sharon M.

    2017-01-01

    Many scientifically and agriculturally important insects use antennae to detect the presence of volatile chemical compounds and extend their proboscis during feeding. The ability to rapidly obtain high-resolution measurements of natural antenna and proboscis movements and assess how they change in response to chemical, developmental, and genetic manipulations can aid the understanding of insect behavior. By extending our previous work on assessing aggregate insect swarm or animal group movements from natural and laboratory videos using the video analysis software SwarmSight, we developed a novel, free, and open-source software module, SwarmSight Appendage Tracking (SwarmSight.org) for frame-by-frame tracking of insect antenna and proboscis positions from conventional web camera videos using conventional computers. The software processes frames about 120 times faster than humans, performs at better than human accuracy, and, using 30 frames per second (fps) videos, can capture antennal dynamics up to 15 Hz. The software was used to track the antennal response of honey bees to two odors and found significant mean antennal retractions away from the odor source about 1 s after odor presentation. We observed antenna position density heat map cluster formation and cluster and mean angle dependence on odor concentration. PMID:29364251
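
    The record does not state SwarmSight's tracking algorithm. Purely as an illustration of frame-by-frame processing of a conventional webcam video, a generic OpenCV frame-differencing sketch (the video file name and threshold are hypothetical, and this is not SwarmSight's method):

      # Generic frame differencing over a 30 fps webcam video: count pixels that
      # change between consecutive frames as a crude per-frame motion measure.
      import cv2

      cap = cv2.VideoCapture("bee_trial.mp4")
      ok, prev = cap.read()
      if not ok:
          raise SystemExit("could not read video")
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          diff = cv2.absdiff(gray, prev_gray)                 # per-pixel change since last frame
          _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
          print("moving pixels:", cv2.countNonZero(mask))
          prev_gray = gray

      cap.release()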

  16. Conjunctive programming: An interactive approach to software system synthesis

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1992-01-01

    This report introduces a technique of software documentation called conjunctive programming and discusses its role in the development and maintenance of software systems. The report also describes the conjoin tool, an adjunct to assist practitioners. Aimed at supporting software reuse while conforming with conventional development practices, conjunctive programming is defined as the extraction, integration, and embellishment of pertinent information obtained directly from an existing database of software artifacts, such as specifications, source code, configuration data, link-edit scripts, utility files, and other relevant information, into a product that achieves desired levels of detail, content, and production quality. Conjunctive programs typically include automatically generated tables of contents, indexes, cross references, bibliographic citations, tables, and figures (including graphics and illustrations). This report presents an example of conjunctive programming by documenting the use and implementation of the conjoin program.

  17. Using a graphical programming language to write CAMAC/GPIB instrument drivers

    NASA Technical Reports Server (NTRS)

    Zambrana, Horacio; Johanson, William

    1991-01-01

    To reduce the complexities of conventional programming, graphical software was used in the development of instrumentation drivers. The graphical software provides a standard set of tools (graphical subroutines) which are sufficient to program the most sophisticated CAMAC/GPIB drivers. These tools were used and instrumentation drivers were successfully developed for operating CAMAC/GPIB hardware from two different manufacturers: LeCroy and DSP. The use of these tools is presented for programming a LeCroy A/D Waveform Analyzer.

  18. Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction

    PubMed Central

    Venkatesan, R.

    2016-01-01

    Effective prediction of software modules that are prone to defects will enable software developers to allocate resources efficiently and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Software testing is a critical task in the software development process, in which the aim is to save time and budget by detecting defects as early as possible and delivering a defect-free product to the customers. This testing phase should be operated carefully and effectively to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are most likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results demonstrate the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets. PMID:27738649
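
    A conventional RBF-network classifier of the kind the paper builds on can be sketched briefly: Gaussian (radial basis) activations around k-means centers feed a linear output layer. The sketch below uses synthetic data and scikit-learn; it does not reproduce the paper's ADBBO tuning or the NASA datasets:

      # Conventional RBFNN baseline: k-means centers -> Gaussian features -> logistic output.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LogisticRegression
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=400, n_features=20, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_tr).cluster_centers_
      width = np.mean(np.linalg.norm(X_tr[:, None, :] - centers[None, :, :], axis=2))

      def rbf_features(X):
          d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
          return np.exp(-(d ** 2) / (2 * width ** 2))      # Gaussian basis activations

      clf = LogisticRegression(max_iter=1000).fit(rbf_features(X_tr), y_tr)
      print("test accuracy:", clf.score(rbf_features(X_te), y_te))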

  19. Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction.

    PubMed

    Kumudha, P; Venkatesan, R

    Effective prediction of software modules that are prone to defects will enable software developers to allocate resources efficiently and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Software testing is a critical task in the software development process, in which the aim is to save time and budget by detecting defects as early as possible and delivering a defect-free product to the customers. This testing phase should be operated carefully and effectively to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are most likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results demonstrate the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets.

  20. High-precision measurements of cementless acetabular components using model-based RSA: an experimental study.

    PubMed

    Baad-Hansen, Thomas; Kold, Søren; Kaptein, Bart L; Søballe, Kjeld

    2007-08-01

    In RSA, tantalum markers attached to metal-backed acetabular cups are often difficult to detect on stereo radiographs due to the high density of the metal shell. This results in occlusion of the prosthesis markers and may lead to inconclusive migration results. Within the last few years, new software systems have been developed to solve this problem. We compared the precision of 3 RSA systems in migration analysis of the acetabular component. A hemispherical and a non-hemispherical acetabular component were mounted in a phantom. Both acetabular components underwent migration analyses with 3 different RSA systems: conventional RSA using tantalum markers, an RSA system using a hemispherical cup algorithm, and a novel model-based RSA system. We found narrow confidence intervals, indicating high precision of the conventional marker system and model-based RSA with regard to migration and rotation. The confidence intervals of conventional RSA and model-based RSA were narrower than those of the hemispherical cup algorithm-based system regarding cup migration and rotation. The model-based RSA software combines the precision of the conventional RSA software with the convenience of the hemispherical cup algorithm-based system. Based on our findings, we believe that these new tools offer an improvement in the measurement of acetabular component migration.

  1. Application Reuse Library for Software, Requirements, and Guidelines

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Thronesbery, Carroll

    1994-01-01

    Better designs are needed for expert systems and other operations automation software, for more reliable, usable and effective human support. A prototype computer-aided Application Reuse Library shows feasibility of supporting concurrent development and improvement of advanced software by users, analysts, software developers, and human-computer interaction experts. Such a library expedites development of quality software, by providing working, documented examples, which support understanding, modification and reuse of requirements as well as code. It explicitly documents and implicitly embodies design guidelines, standards and conventions. The Application Reuse Library provides application modules with Demo-and-Tester elements. Developers and users can evaluate applicability of a library module and test modifications, by running it interactively. Sub-modules provide application code and displays and controls. The library supports software modification and reuse, by providing alternative versions of application and display functionality. Information about human support and display requirements is provided, so that modifications will conform to guidelines. The library supports entry of new application modules from developers throughout an organization. Example library modules include a timer, some buttons and special fonts, and a real-time data interface program. The library prototype is implemented in the object-oriented G2 environment for developing real-time expert systems.

  2. Software Maintenance Exercises for a Software Engineering Project Course

    DTIC Science & Technology

    1989-02-01

    what is program style and how can it be measured? Program style has been defined as a "followed convention with respect to punctuation, capitalization ...convention with respect to punctuation, capitalization, and typographic arrangement and display." DASC is a software tool that takes a syntactically...Specifications: A Framework; CM-12 Software Metrics; CM-13 Introduction to Software Verification and Validation; CM-14 Intellectual Property Protection for

  3. Multipurpose Educational Modules to Teach Hydraulic Hybrid Vehicle Technologies

    DOT National Transportation Integrated Search

    2007-09-01

    The goal of the overall project is to develop a software simulation for a hydraulic hybrid vehicle. The simulation will enable students to compare various hybrid configurations with conventional IC engine performance.

  4. OpenMebius: an open source software for isotopically nonstationary 13C-based metabolic flux analysis.

    PubMed

    Kajihata, Shuichi; Furusawa, Chikara; Matsuda, Fumio; Shimizu, Hiroshi

    2014-01-01

    The in vivo measurement of metabolic flux by (13)C-based metabolic flux analysis ((13)C-MFA) provides valuable information regarding cell physiology. Bioinformatics tools have been developed to estimate metabolic flux distributions from the results of tracer isotopic labeling experiments using a (13)C-labeled carbon source. Metabolic flux is determined by nonlinear fitting of a metabolic model to the isotopic labeling enrichment of intracellular metabolites measured by mass spectrometry. Whereas (13)C-MFA is conventionally performed under isotopically constant conditions, isotopically nonstationary (13)C metabolic flux analysis (INST-(13)C-MFA) has recently been developed for flux analysis of cells with photosynthetic activity and cells at a quasi-steady metabolic state (e.g., primary cells or microorganisms under stationary phase). Here, the development of a novel open source software for INST-(13)C-MFA on the Windows platform is reported. OpenMebius (Open source software for Metabolic flux analysis) provides the function of autogenerating metabolic models for simulating isotopic labeling enrichment from a user-defined configuration worksheet. Analysis using simulated data demonstrated the applicability of OpenMebius for INST-(13)C-MFA. Confidence intervals determined by INST-(13)C-MFA were less than those determined by conventional methods, indicating the potential of INST-(13)C-MFA for precise metabolic flux analysis. OpenMebius is the open source software for the general application of INST-(13)C-MFA.
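
    The core fitting step described here, adjusting flux parameters until the model's predicted labeling enrichments match the measurements, can be illustrated with a deliberately tiny least-squares example; the single-pool model, time points, and enrichment values below are made up and are far simpler than OpenMebius's actual metabolic models:

      # Toy illustration of nonlinear least-squares fitting of a flux parameter to
      # isotopically nonstationary labeling data (hypothetical single-pool model).
      import numpy as np
      from scipy.optimize import least_squares

      t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])            # sampling times (min)
      measured = np.array([0.0, 0.28, 0.47, 0.70, 0.90])    # hypothetical 13C enrichment

      def model(flux, t, pool_size=2.0):
          # label washing in to a single metabolite pool at rate flux / pool_size
          return 1.0 - np.exp(-flux * t / pool_size)

      fit = least_squares(lambda f: model(f[0], t) - measured, x0=[0.1], bounds=(0, np.inf))
      print("estimated flux:", fit.x[0])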

  5. Coding conventions and principles for a National Land-Change Modeling Framework

    USGS Publications Warehouse

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  6. Space shuttle onboard navigation console expert/trainer system

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bochsler, Dan

    1987-01-01

    A software system for use in enhancing operational performance as well as training ground controllers in monitoring onboard Space Shuttle navigation sensors is described. The Onboard Navigation (ONAV) development reflects a trend toward following a structured and methodical approach to development. The ONAV system must deal with integrated conventional and expert system software, complex interfaces, and implementation limitations due to the target operational environment. An overview of the onboard navigation sensor monitoring function is presented, along with a description of guidelines driving the development effort, requirements that the system must meet, current progress, and future efforts.

  7. Modular space station, phase B extension. Information management advanced development. Volume 5: Software assembly

    NASA Technical Reports Server (NTRS)

    Gerber, C. R.

    1972-01-01

    The development of uniform computer program standards and conventions for the modular space station is discussed. The accomplishments analyzed are: (1) development of computer program specification hierarchy, (2) definition of computer program development plan, and (3) recommendations for utilization of all operating on-board space station related data processing facilities.

  8. Development and Operation of a Database Machine for Online Access and Update of a Large Database.

    ERIC Educational Resources Information Center

    Rush, James E.

    1980-01-01

    Reviews the development of a fault tolerant database processor system which replaced OCLC's conventional file system. A general introduction to database management systems and the operating environment is followed by a description of the hardware selection, software processes, and system characteristics. (SW)

  9. Clinical software development for the Web: lessons learned from the BOADICEA project

    PubMed Central

    2012-01-01

    Background In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. Results We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers. BOADICEA Web Application version 1 was released for general use in November 2007. By May 2010, we had > 1200 registered users based in the UK, USA, Canada, South America, Europe, Africa, Middle East, SE Asia, Australia and New Zealand. Conclusions We found that an evolutionary software process was effective when we developed the BOADICEA Web Application. The key clinical software development issues identified during the BOADICEA Web Application project were: software reliability, Web security, clinical data protection and user feedback. PMID:22490389

  10. Clinical software development for the Web: lessons learned from the BOADICEA project.

    PubMed

    Cunningham, Alex P; Antoniou, Antonis C; Easton, Douglas F

    2012-04-10

    In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers. BOADICEA Web Application version 1 was released for general use in November 2007. By May 2010, we had > 1200 registered users based in the UK, USA, Canada, South America, Europe, Africa, Middle East, SE Asia, Australia and New Zealand. We found that an evolutionary software process was effective when we developed the BOADICEA Web Application. The key clinical software development issues identified during the BOADICEA Web Application project were: software reliability, Web security, clinical data protection and user feedback.

  11. Comparison of conventional study model measurements and 3D digital study model measurements from laser scanned dental impressions

    NASA Astrophysics Data System (ADS)

    Nugrahani, F.; Jazaldi, F.; Noerhadi, N. A. I.

    2017-08-01

    The field of orthodontics is always evolving, and this includes the use of innovative technology. One such development is three-dimensional (3D) digital study models, which replace conventional study models cast in stone. This study aimed to compare mesio-distal tooth width, intercanine width, and intermolar width measurements between a 3D digital study model and a conventional study model. Twelve sets of upper-arch dental impressions were taken from subjects with non-crowded teeth. The impressions were taken twice, once with alginate and once with polyvinylsiloxane. The alginate impressions were used for the conventional study models, and the polyvinylsiloxane impressions were scanned to obtain the 3D digital study models. Scanning was performed using a laser triangulation scanner assembled by the School of Electrical Engineering and Informatics at the Institut Teknologi Bandung and David Laser Scan software. For the conventional models, the mesio-distal width, intercanine width, and intermolar width were measured using digital calipers; in the 3D digital study models they were measured using software. There were no significant differences in the mesio-distal width, intercanine width, or intermolar width measurements between the conventional and 3D digital study models (p > 0.05). Thus, measurements obtained from 3D digital study models are as accurate as those obtained from conventional study models.
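
    The kind of paired comparison reported above might be run as in the following sketch; the specific statistical test and all measurement values are placeholders chosen for illustration, not taken from the study.

```python
"""Hedged sketch of a paired comparison between two measurement methods.
The test choice and the numbers are fabricated placeholders."""
import numpy as np
from scipy import stats

# Mesio-distal widths (mm) measured on the same 12 casts by both methods (fabricated values).
conventional = np.array([7.9, 8.1, 8.0, 7.7, 8.3, 8.2, 7.8, 8.0, 8.1, 7.9, 8.2, 8.0])
digital_3d   = np.array([7.8, 8.2, 8.0, 7.8, 8.3, 8.1, 7.9, 8.0, 8.2, 7.9, 8.1, 8.0])

t_stat, p_value = stats.ttest_rel(conventional, digital_3d)  # paired t-test
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
print("no significant difference" if p_value > 0.05 else "significant difference")
```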

  12. Integrating interface slicing into software engineering processes

    NASA Technical Reports Server (NTRS)

    Beck, Jon

    1993-01-01

    Interface slicing is a tool which was developed to facilitate software engineering. As previously presented, it was described in terms of its techniques and mechanisms. The integration of interface slicing into specific software engineering activities is considered by discussing a number of potential applications of interface slicing. The applications discussed specifically address the problems, issues, or concerns raised in a previous project. Because a complete interface slicer is still under development, these applications must be phrased in future tenses. Nonetheless, the interface slicing techniques which were presented can be implemented using current compiler and static analysis technology. Whether implemented as a standalone tool or as a module in an integrated development or reverse engineering environment, they require analysis no more complex than that required for current system development environments. By contrast, conventional slicing is a methodology which, while showing much promise and intuitive appeal, has yet to be fully implemented in a production language environment despite 12 years of development.

  13. VA's Integrated Imaging System on three platforms.

    PubMed

    Dayhoff, R E; Maloney, D L; Majurski, W J

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.

  14. VA's Integrated Imaging System on three platforms.

    PubMed Central

    Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983

  15. Development of an Advanced Stimulation / Production Predictive Simulator for Enhanced Geothermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritchett, John W.

    2015-04-15

    There are several well-known obstacles to the successful deployment of EGS projects on a commercial scale, of course. EGS projects are expected to be deeper, on the average, than conventional “natural” geothermal reservoirs, and drilling costs are already a formidable barrier to conventional geothermal projects. Unlike conventional resources (which frequently announce their presence with natural manifestations such as geysers, hot springs and fumaroles), EGS prospects are likely to appear fairly undistinguished from the earth surface. And, of course, the probable necessity of fabricating a subterranean fluid circulation network to mine the heat from the rock (instead of simply relying on natural, pre-existing permeable fractures) adds a significant degree of uncertainty to the prospects for success. Accordingly, the basic motivation for the work presented herein was to try to develop a new set of tools that would be more suitable for this purpose. Several years ago, the Department of Energy’s Geothermal Technologies Office recognized this need and funded a cost-shared grant to our company (then SAIC, now Leidos) to partner with Geowatt AG of Zurich, Switzerland and undertake the development of a new reservoir simulator that would be more suitable for EGS forecasting than the existing tools. That project has now been completed and a new numerical geothermal reservoir simulator has been developed. It is named “HeatEx” (for “Heat Extraction”) and is almost completely new, although its methodology owes a great deal to other previous geothermal software development efforts, including Geowatt’s “HEX-S” code, the STAR and SPFRAC simulators developed here at SAIC/Leidos, the MINC approach originally developed at LBNL, and tracer analysis software originally formulated at INEL. Furthermore, the development effort was led by engineers with many years of experience in using reservoir simulation software to make meaningful forecasts for real geothermal projects, not just software designers. It is hoped that, as a result, HeatEx will prove useful during the early stages of the development of EGS technology. The basic objective was to design a tool that could use field data that are likely to become available during the early phases of an EGS project (that is, during initial reconnaissance and fracture stimulation operations) to guide forecasts of the longer-term behavior of the system during production and heat-mining.

  16. Robotic air vehicle. Blending artificial intelligence with conventional software

    NASA Technical Reports Server (NTRS)

    Mcnulty, Christa; Graham, Joyce; Roewer, Paul

    1987-01-01

    The Robotic Air Vehicle (RAV) system is described. The program's objectives were to design, implement, and demonstrate cooperating expert systems for piloting robotic air vehicles. The development of this system merges conventional programming used in passive navigation with Artificial Intelligence techniques such as voice recognition, spatial reasoning, and expert systems. The individual components of the RAV system are discussed as well as their interactions with each other and how they operate as a system.

  17. 2D projection-based software application for mobile C-arms optimises wire placement in the proximal femur - An experimental study.

    PubMed

    Swartman, B; Frere, D; Wei, W; Schnetzke, M; Beisemann, N; Keil, H; Franke, J; Grützner, P A; Vetter, S Y

    2017-10-01

    A new software application can be used without fixed reference markers or a registration process in wire placement. The aim was to compare placement of Kirschner wires (K-wires) into the proximal femur with the software application versus the conventional method without guiding. As the study hypothesis, we assumed fewer placement attempts, a shorter procedure time and a shorter fluoroscopy time using the software, with the same precision inside a proximal femur bone model. The software detects a K-wire within the 2D fluoroscopic image. By evaluating its direction and tip location, it superimposes a trajectory on the image, visualizing the intended direction of the K-wire. The K-wire was positioned in 20 artificial bones with the use of the software by one surgeon; 20 bones served as conventional controls. A brass thumb tack was placed into the femoral head and its tip targeted with the wire. The number of placement attempts, duration of the procedure, duration of fluoroscopy and distance to the target in a postoperative 3D scan were recorded. Compared with the conventional method, use of the application showed fewer attempts for optimal wire placement (p=0.026), shorter duration of surgery (p=0.004), shorter fluoroscopy time (p=0.024) and higher precision (p=0.018). Final wire position was achieved in the first attempt in 17 out of 20 cases with the software and in 9 out of 20 cases with the conventional method. The study hypothesis was confirmed. The new application optimised the process of K-wire placement in the proximal femur in an artificial bone model while also improving precision. Benefits lie especially in the reduction of placement attempts and fluoroscopy time, an important aspect of radiation protection. The software runs on a conventional image intensifier and can therefore be easily integrated into the daily surgical routine.

  18. CGNS Mid-Level Software Library and Users Guide

    NASA Technical Reports Server (NTRS)

    Poirier, Diane; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: - The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; - The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; - The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and - The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The CGNS Mid-level Library was designed to ease the implementation of CGNS by providing developers with a collection of handy I/O functions. Since knowledge of the ADF core is not required to use this library, it will greatly facilitate the task of interfacing with CGNS. There are currently 48 user callable functions that comprise the Mid-level library and are described in the Users Guide. The library is written in C, but each function has a FORTRAN counterpart.

  19. Finite element analysis of container ship's cargo hold using ANSYS and POSEIDON software

    NASA Astrophysics Data System (ADS)

    Tanny, Tania Tamiz; Akter, Naznin; Amin, Osman Md.

    2017-12-01

    Nowadays, ship structural analysis has become an integral part of preliminary ship design, providing further support for the development and detail design of ship structures. Structural analyses of container ships' cargo holds are carried out to balance safety and capacity, as those ships are exposed to a high risk of structural damage during voyages. Two different design methodologies have been considered for the structural analysis of a container ship's cargo hold. One is a rule-based methodology and the other is a more conventional software-based analysis. The rule-based analysis is done with DNV-GL's software POSEIDON and the conventional package-based analysis is done with the ANSYS structural module. Both methods have been applied to analyze some of the mechanical properties of the model, such as total deformation, stress-strain distribution, Von Mises stress, and fatigue, following different design bases and approaches, to provide some guidance for further improvements in ship structural design.

  20. Design and Development of a Virtual Facility Tour Using iPIX(TM) Technology

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2002-01-01

    The iPIX virtual tour software, used in conjunction with a web-based interface, creates a unique and valuable system that gives users an efficient virtual capability for touring facilities while acquiring the necessary technical content. A user's guide to the Mechanics and Durability Branch's virtual tour is presented. The guide provides the user with instruction on operating both scripted and unscripted tours, as well as a discussion of the tours for Buildings 1148, 1205 and 1256 at NASA Langley Research Center. Furthermore, an in-depth discussion is presented on how to develop a virtual tour using the iPIX software interface with conventional HTML and JavaScript. The main aspects discussed are the network and computing issues associated with using this capability. A discussion of how to take the iPIX pictures, manipulate them, and bond them together to form hemispherical images is also presented. Linking of images with additional multimedia content is discussed. Finally, a method to integrate the iPIX software with conventional HTML and JavaScript to facilitate linking with multimedia is presented.

  1. Simulation and animation of sensor-driven robots.

    PubMed

    Chen, C; Trivedi, M M; Bidlack, C R

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  2. Appendix C: Rapid development approaches for system engineering and design

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Conventional system architectures, development processes, and tool environments often produce systems which exceed cost expectations and are obsolete before they are fielded. This paper explores some of the reasons for this and provides recommendations for how we can do better. These recommendations are based on DoD and NASA system developments and on our exploration and development of system/software engineering tools.

  3. Discharge-measurement system using an acoustic Doppler current profiler with applications to large rivers and estuaries

    USGS Publications Warehouse

    Simpson, Michael R.; Oltmann, Richard N.

    1993-01-01

    Discharge measurement of large rivers and estuaries is difficult, time consuming, and sometimes dangerous. Frequently, discharge measurements cannot be made in tide-affected rivers and estuaries using conventional discharge-measurement techniques because of dynamic discharge conditions. The acoustic Doppler discharge-measurement system (ADDMS) was developed by the U.S. Geological Survey using a vessel-mounted acoustic Doppler current profiler coupled with specialized computer software to measure horizontal water velocity at 1-meter vertical intervals in the water column. The system computes discharge from water- and vessel-velocity data supplied by the ADDMS using vector-algebra algorithms included in the discharge-measurement software. With this system, a discharge measurement can be obtained by engaging the computer software and traversing a river or estuary from bank to bank; discharge in parts of the river or estuarine cross section that cannot be measured because of ADDMS depth limitations is estimated by the system. Comparisons of ADDMS-measured discharges with ultrasonic-velocity-meter-measured discharges, along with error-analysis data, have confirmed that discharges provided by the ADDMS are at least as accurate as those produced using conventional methods. In addition, the much shorter measurement time (2 minutes using the ADDMS compared with 1 hour or longer using conventional methods) has enabled use of the ADDMS for several applications where conventional discharge methods could not have been used with the required accuracy because of dynamic discharge conditions.
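
    A much simplified sketch of the vector-algebra computation described above is given below: the discharge through the measured portion of the cross section is the z-component of the cross product of water velocity and vessel velocity, summed over depth cells and ensembles. The arrays are fabricated placeholders, and the unmeasured edge, top, and bottom zones that the ADDMS estimates separately are ignored.

```python
"""Simplified sketch of the cross-product discharge computation: sum the
z-component of (water velocity) x (boat velocity) over depth cells and time.
All arrays are fabricated placeholders, not real ADDMS data."""
import numpy as np

dt = 1.0          # time between ensembles (s)
cell_size = 1.0   # vertical bin size (m)

# water velocity per ensemble and depth cell (m/s), east (u) and north (v) components
u_w = np.array([[0.90, 0.80, 0.70], [1.00, 0.90, 0.80], [0.95, 0.85, 0.75]])
v_w = np.array([[0.10, 0.10, 0.00], [0.20, 0.10, 0.10], [0.15, 0.10, 0.05]])

# boat (vessel) velocity per ensemble (m/s)
u_b = np.array([0.1, 0.1, 0.1])
v_b = np.array([1.2, 1.3, 1.2])

# z-component of the cross product for every depth cell of every ensemble
cross_z = u_w * v_b[:, None] - v_w * u_b[:, None]

discharge = np.sum(cross_z) * cell_size * dt   # m^3/s through the measured portion
print(f"measured-portion discharge: {discharge:.1f} m^3/s")
```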

  4. A survey of program slicing for software engineering

    NASA Technical Reports Server (NTRS)

    Beck, Jon

    1993-01-01

    This research concerns program slicing, which is used as a tool for program maintenance of software systems. Program slicing decreases the level of effort required to understand and maintain complex software systems. It was first designed as a debugging aid, but it has since been generalized into various tools and extended to include program comprehension, module cohesion estimation, requirements verification, dead code elimination, and maintenance of several software systems, including reverse engineering, parallelization, portability, and reuse component generation. This paper seeks to address and define terminology, theoretical concepts, program representation, different program graphs, developments in static slicing, dynamic slicing, and semantics and mathematical models. Applications for conventional slicing are presented, along with a prognosis of future work in this field.

  5. Virtual chromoendoscopy can be a useful software tool in capsule endoscopy.

    PubMed

    Duque, Gabriela; Almeida, Nuno; Figueiredo, Pedro; Monsanto, Pedro; Lopes, Sandra; Freire, Paulo; Ferreira, Manuela; Carvalho, Rita; Gouveia, Hermano; Sofia, Carlos

    2012-05-01

    Capsule endoscopy (CE) has revolutionized the study of the small bowel. One major drawback of this technique is that we cannot interfere with the image acquisition process. Therefore, the development of new software tools that could modify the images and increase both detection and diagnosis of small-bowel lesions would be very useful. The Flexible Spectral Imaging Color Enhancement (FICE) system, which allows for virtual chromoendoscopy, is one of these software tools. The aim was to evaluate the reproducibility and diagnostic accuracy of the FICE system in CE. This prospective study involved 20 patients. First, four physicians interpreted 150 static FICE images, and the overall agreement between them was determined using the Fleiss kappa test. Second, two experienced gastroenterologists, blinded to each other's results, analyzed the complete 20 video streams. One interpreted conventional capsule videos and the other the CE-FICE videos at setting 2. All findings were reported, regardless of their clinical value. Non-concordant findings between both interpretations were analyzed by a consensus panel of four gastroenterologists who reached a final result (positive or negative finding). In the first arm of the study, the overall concordance between the four gastroenterologists was substantial (0.650). In the second arm, the conventional mode identified 75 findings and the CE-FICE mode 95. The CE-FICE mode did not miss any lesions identified by the conventional mode and allowed the identification of a higher number of angiodysplasias (35 vs. 32) and erosions (41 vs. 24). There is reproducibility in the interpretation of CE-FICE images between different observers experienced in conventional CE. The use of virtual chromoendoscopy in CE seems to increase its diagnostic accuracy by highlighting small-bowel erosions and angiodysplasias that were not identified by the conventional mode.
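
    The overall agreement statistic named above can be computed as in the following sketch of Fleiss' kappa; the rating matrix is a fabricated placeholder (rows are images, columns are rating categories, entries are the number of readers choosing each category).

```python
"""Sketch of Fleiss' kappa for multi-reader agreement. The rating matrix is a
fabricated placeholder, not data from the study."""
import numpy as np

ratings = np.array([   # 6 images, 3 categories, 4 readers per image
    [4, 0, 0],
    [3, 1, 0],
    [0, 4, 0],
    [1, 3, 0],
    [0, 0, 4],
    [2, 1, 1],
])

n = ratings.sum(axis=1)[0]                 # readers per image (assumed constant)
N = ratings.shape[0]                       # number of images

p_j = ratings.sum(axis=0) / (N * n)        # category proportions
P_i = (np.sum(ratings ** 2, axis=1) - n) / (n * (n - 1))  # per-image agreement
P_bar = P_i.mean()
P_e = np.sum(p_j ** 2)                     # chance agreement

kappa = (P_bar - P_e) / (1 - P_e)
print(f"Fleiss' kappa = {kappa:.3f}")
```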

  6. Web-based training: a new paradigm in computer-assisted instruction in medicine.

    PubMed

    Haag, M; Maylein, L; Leven, F J; Tönshoff, B; Haux, R

    1999-01-01

    Computer-assisted instruction (CAI) programs based on internet technologies, especially on the world wide web (WWW), provide new opportunities in medical education. The aim of this paper is to examine different aspects of such programs, which we call 'web-based training (WBT) programs', and to differentiate them from conventional CAI programs. First, we will distinguish five different interaction types: presentation; browsing; tutorial dialogue; drill and practice; and simulation. In contrast to conventional CAI, there are four architectural types of WBT programs: client-based; remote data and knowledge; distributed teaching; and server-based. We will discuss the implications of the different architectures for developing WBT software. WBT programs have to meet other requirements than conventional CAI programs. The most important tools and programming languages for developing WBT programs will be listed and assigned to the architecture types. For the future, we expect a trend from conventional CAI towards WBT programs.

  7. Expert System Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The C Language Integrated Production System (CLIPS) is a software shell for developing expert systems, designed to allow research and development of artificial intelligence on conventional computers. Originally developed by Johnson Space Center, it enables highly efficient pattern matching. A collection of conditions and the actions to be taken if the conditions are met is built into a rule network. Additional pertinent facts are matched against the rule network. Using the program, E.I. DuPont de Nemours & Co. is monitoring chemical production machines; California Polytechnic State University is investigating artificial intelligence in computer-aided design; Mentor Graphics has built a new Circuit Synthesis system; and Brooke and Brooke, a law firm, can determine which facts from a file are most important.
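
    The condition-action matching described above can be illustrated with the toy forward-chaining loop below, written in Python rather than CLIPS syntax; the facts and rules are invented for the example and do not reflect any of the applications listed.

```python
"""Toy illustration (Python, not CLIPS) of condition-action rules: firing a rule
asserts a new fact that is matched against the remaining rules. Facts and rules
are made up for the example."""

facts = {"temperature high", "pressure rising"}

rules = [
    # (conditions that must all be present, fact to assert when they are)
    ({"temperature high", "pressure rising"}, "reactor alarm"),
    ({"reactor alarm"}, "notify operator"),
]

changed = True
while changed:                      # naive forward chaining until no rule fires
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # "action": assert a new fact
            print("fired:", conclusion)
            changed = True

print("final facts:", facts)
```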

  8. Conventions and workflows for using Situs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wriggers, Willy, E-mail: wriggers@biomachina.org

    2012-04-01

    Recent developments of the Situs software suite for multi-scale modeling are reviewed. Typical workflows and conventions encountered during processing of biophysical data from electron microscopy, tomography or small-angle X-ray scattering are described. Situs is a modular program package for the multi-scale modeling of atomic resolution structures and low-resolution biophysical data from electron microscopy, tomography or small-angle X-ray scattering. This article provides an overview of recent developments in the Situs package, with an emphasis on workflows and conventions that are important for practical applications. The modular design of the programs facilitates scripting in the bash shell that allows specific programs to be combined in creative ways that go beyond the original intent of the developers. Several scripting-enabled functionalities, such as flexible transformations of data type, the use of symmetry constraints or the creation of two-dimensional projection images, are described. The processing of low-resolution biophysical maps in such workflows follows not only first principles but often relies on implicit conventions. Situs conventions related to map formats, resolution, correlation functions and feature detection are reviewed and summarized. The compatibility of the Situs workflow with CCP4 conventions and programs is discussed.

  9. The KATE shell: An implementation of model-based control, monitor and diagnosis

    NASA Technical Reports Server (NTRS)

    Cornell, Matthew

    1987-01-01

    The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities and limited simulation support. These limitations have motivated the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to perform real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach and offers many advantages over it, although, for systems which require high-speed reaction times or are not well understood, knowledge-based control and monitor systems may not be appropriate.

  10. ICW eHealth Framework.

    PubMed

    Klein, Karsten; Wolff, Astrid C; Ziebold, Oliver; Liebscher, Thomas

    2008-01-01

    The ICW eHealth Framework (eHF) is a powerful infrastructure and platform for the development of service-oriented solutions in the health care business. It is the culmination of many years of experience of ICW in the development and use of in-house health care solutions and represents the foundation of ICW product developments based on the Java Enterprise Edition (Java EE). The ICW eHealth Framework has been leveraged to allow development by external partners - enabling adopters a straightforward integration into ICW solutions. The ICW eHealth Framework consists of reusable software components, development tools, architectural guidelines and conventions defining a full software-development and product lifecycle. From the perspective of a partner, the framework provides services and infrastructure capabilities for integrating applications within an eHF-based solution. This article introduces the ICW eHealth Framework's basic architectural concepts and technologies. It provides an overview of its module and component model, describes the development platform that supports the complete software development lifecycle of health care applications and outlines technological aspects, mainly focusing on application development frameworks and open standards.

  11. Advanced Automation for Ion Trap Mass Spectrometry-New Opportunities for Real-Time Autonomous Analysis

    NASA Technical Reports Server (NTRS)

    Palmer, Peter T.; Wong, C. M.; Salmonson, J. D.; Yost, R. A.; Griffin, T. P.; Yates, N. A.; Lawless, James G. (Technical Monitor)

    1994-01-01

    The utility of MS/MS for both target compound analysis and the structure elucidation of unknowns has been described in a number of references. A broader acceptance of this technique has not yet been realized, as it requires large, complex, and costly instrumentation that has not been competitive with more conventional techniques. Recent advancements in ion trap mass spectrometry promise to change this situation. Although the ion trap's small size, sensitivity, and ability to perform multiple stages of mass spectrometry have made it eminently suitable for on-line, real-time monitoring applications, advanced automation techniques are required to make these capabilities more accessible to non-experts. Towards this end we have developed custom software for the design and implementation of MS/MS experiments. This software allows the user to take full advantage of the ion trap's versatility with respect to ionization techniques, scan proxies, and ion accumulation/ejection methods. Additionally, expert system software has been developed for autonomous target compound analysis. This software has been linked to ion trap control software and a commercial data system to bring all of the steps in the analysis cycle under control of the expert system. These software development efforts and their utilization for a number of trace analysis applications will be described.

  12. Simulation and animation of sensor-driven robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, C.; Trivedi, M.M.; Bidlack, C.R.

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  13. Courseware Development with Animated Pedagogical Agents in Learning System to Improve Learning Motivation

    ERIC Educational Resources Information Center

    Chin, Kai-Yi; Hong, Zeng-Wei; Huang, Yueh-Min; Shen, Wei-Wei; Lin, Jim-Min

    2016-01-01

    The addition of animated pedagogical agents (APAs) in computer-assisted learning (CAL) systems could successfully enhance students' learning motivation and engagement in learning activities. Conventionally, APA-incorporated multimedia materials are constructed through the cooperation of teachers and software programmers. However, the thinking…

  14. The Effect of Interactive CD-ROM/Digitized Audio Courseware on Reading among Low-Literate Adults.

    ERIC Educational Resources Information Center

    Gretes, John A.; Green, Michael

    1994-01-01

    Compares a multimedia adult literacy instructional course, Reading to Educate and Develop Yourself (READY), to traditional classroom instruction by studying effects of replacing conventional learning tools with computer-assisted instruction (CD-ROMs and audio software). Results reveal that READY surpassed traditional instruction for virtually…

  15. Appendix B: Rapid development approaches for system engineering and design

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Conventional processes often produce systems which are obsolete before they are fielded. This paper explores some of the reasons for this, and provides a vision of how we can do better. This vision is based on our explorations in improved processes and system/software engineering tools.

  16. EVA: Collaborative Distributed Learning Environment Based in Agents.

    ERIC Educational Resources Information Center

    Sheremetov, Leonid; Tellez, Rolando Quintero

    In this paper, a Web-based learning environment developed within the project called Virtual Learning Spaces (EVA, in Spanish) is presented. The environment is composed of knowledge, collaboration, consulting, experimentation, and personal spaces as a collection of agents and conventional software components working over the knowledge domains. All…

  17. EVA: An Interactive Web-Based Collaborative Learning Environment

    ERIC Educational Resources Information Center

    Sheremetov, Leonid; Arenas, Adolfo Guzman

    2002-01-01

    In this paper, a Web-based learning environment developed within the project called Virtual Learning Spaces (EVA, in Spanish) is described. The environment is composed of knowledge, collaboration, consulting and experimentation spaces as a collection of agents and conventional software components working over the knowledge domains. All user…

  18. Novel Software for Performing Leksell Stereotactic Surgery without the Use of Printing Films: Technical Note.

    PubMed

    Hashizume, Akira; Akimitsu, Tomohide; Iida, Koji; Kagawa, Kota; Katagiri, Masaya; Hanaya, Ryosuke; Arita, Kazunori; Kurisu, Kaoru

    2016-01-01

    Hospitals in Japan have recently begun to employ DICOM viewer systems on desktop or laptop monitors. However, conventional embedding surgery for deep-brain stimulation with the Leksell stereotactic system (LSS) requires printed X-ray films for defining the coordinates, coregistering the actual surgical films with the reference coordinates, and validating the needle trajectories. By performing these procedures directly on desktop or laptop monitors, the authors were able to develop novel software that enables complete digital manipulation with the Leksell frame without printing films. In this study, we validated the practical use of the LSS and the benefit of this software at Takanobashi Central Hospital and Kagoshima University Hospital.

  19. Verification and Validation of Autonomy Software at NASA

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles

    2000-01-01

    Autonomous software holds the promise of new operational possibilities, easier design and development and lower operating costs. However, as those systems close control loops and arbitrate resources on board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis on model-based autonomy, model-checking techniques and concrete experiments at NASA.

  20. Verification and Validation of Autonomy Software at NASA

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles

    2000-01-01

    Autonomous software holds the promise of new operational possibilities, easier design and development, and lower operating costs. However, as those systems close control loops and arbitrate resources on-board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis on model-based autonomy, model-checking techniques, and concrete experiments at NASA.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahanani, Nursinta Adi, E-mail: sintaadi@batan.go.id; Natsir, Khairina, E-mail: sintaadi@batan.go.id; Hartini, Entin, E-mail: sintaadi@batan.go.id

    Data processing software packages such as VSOP and MCNPX are scientifically proven and complete. However, the outputs of VSOP and MCNPX are huge and complex text files, and in the analysis process users need additional processing, such as Microsoft Excel, to present informative results. This research developed a user interface software for the output of VSOP and MCNPX. The VSOP program output is used to support neutronic analysis and the MCNPX program output is used to support burn-up analysis. Software development used iterative development methods, which allow for revision and the addition of features according to user needs. Processing time with this software is 500 times faster than with conventional methods using Microsoft Excel. Python is used as the programming language, because Python is available for all major operating systems: Windows, Linux/Unix, OS/2, Mac, Amiga, among others. Values that support neutronic analysis are k-eff, burn-up and the masses of Pu-239 and Pu-241. Burn-up analysis used the mass inventory values of actinides (thorium, plutonium, neptunium and uranium). Values are visualized in graphical form to support analysis.
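
    The following sketch shows the kind of post-processing such an interface performs: stream a large output file, extract k-eff versus burn-up, and plot the result. The line format assumed in the regular expression and the file name are hypothetical placeholders; the real VSOP and MCNPX output layouts differ.

```python
"""Hedged sketch of extracting k-eff vs burn-up from a large text output file.
The line format and file name are hypothetical placeholders, not the actual
VSOP/MCNPX formats."""
import re
import matplotlib.pyplot as plt

PATTERN = re.compile(r"burnup\s*=\s*([\d.]+)\s+k-eff\s*=\s*([\d.]+)")  # assumed format

def extract_keff(path):
    burnup, keff = [], []
    with open(path, encoding="utf-8", errors="ignore") as handle:
        for line in handle:                      # stream the huge file line by line
            match = PATTERN.search(line)
            if match:
                burnup.append(float(match.group(1)))
                keff.append(float(match.group(2)))
    return burnup, keff

if __name__ == "__main__":
    x, y = extract_keff("vsop_output.txt")       # placeholder file name
    plt.plot(x, y, marker="o")
    plt.xlabel("burn-up (MWd/kg)")
    plt.ylabel("k-eff")
    plt.title("k-eff vs burn-up")
    plt.show()
```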

  2. Real-Time 12-Lead High-Frequency QRS Electrocardiography for Enhanced Detection of Myocardial Ischemia and Coronary Artery Disease

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Kulecz, Walter B.; DePalma, Jude L.; Feiveson, Alan H.; Wilson, John S.; Rahman, M. Atiar; Bungo, Michael W.

    2004-01-01

    Several studies have shown that diminution of the high-frequency (HF; 150-250 Hz) components present within the central portion of the QRS complex of an electrocardiogram (ECG) is a more sensitive indicator for the presence of myocardial ischemia than are changes in the ST segments of the conventional low-frequency ECG. However, until now, no device has been capable of displaying, in real time on a beat-to-beat basis, changes in these HF QRS ECG components in a continuously monitored patient. Although several software programs have been designed to acquire the HF components over the entire QRS interval, such programs have involved laborious off-line calculations and postprocessing, limiting their clinical utility. We describe a personal computer-based ECG software program developed recently at the National Aeronautics and Space Administration (NASA) that acquires, analyzes, and displays HF QRS components in each of the 12 conventional ECG leads in real time. The system also updates these signals and their related derived parameters in real time on a beat-to-beat basis for any chosen monitoring period and simultaneously displays the diagnostic information from the conventional (low-frequency) 12-lead ECG. The real-time NASA HF QRS ECG software is being evaluated currently in multiple clinical settings in North America. We describe its potential usefulness in the diagnosis of myocardial ischemia and coronary artery disease.
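
    The core signal-processing step described above can be sketched as follows: band-pass filter the ECG to 150-250 Hz and summarize the filtered QRS as a root-mean-square amplitude. The synthetic waveform, sampling rate, and filter order are assumptions made for the example, not details of the NASA implementation.

```python
"""Illustrative sketch: isolate 150-250 Hz QRS components with a band-pass filter
and report their RMS amplitude. The synthetic signal and parameters are assumptions."""
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                   # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
# synthetic "ECG": low-frequency wave plus a small 200 Hz burst inside the QRS window
ecg = np.sin(2 * np.pi * 1.2 * t)
qrs_window = (t > 0.45) & (t < 0.55)
ecg[qrs_window] += 0.05 * np.sin(2 * np.pi * 200 * t[qrs_window])

b, a = butter(4, [150, 250], btype="bandpass", fs=fs)   # 150-250 Hz band
hf = filtfilt(b, a, ecg)                                # zero-phase filtering

hf_qrs_rms = np.sqrt(np.mean(hf[qrs_window] ** 2))      # HF QRS amplitude (RMS)
print(f"HF QRS RMS amplitude: {hf_qrs_rms:.4f} (arbitrary units)")
```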

  3. Automating software design system DESTA

    NASA Technical Reports Server (NTRS)

    Lovitsky, Vladimir A.; Pearce, Patricia D.

    1992-01-01

    'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms by means of a natural language (Russian or English) functional specification of algorithms or software being developed. DESTA represents the computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tools support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software in contrast to conventional Computer Aided Design (CAD) systems which provide low level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main features of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.

  4. Multiple-Parameter, Low-False-Alarm Fire-Detection Systems

    NASA Technical Reports Server (NTRS)

    Hunter, Gary W.; Greensburg, Paul; McKnight, Robert; Xu, Jennifer C.; Liu, C. C.; Dutta, Prabir; Makel, Darby; Blake, D.; Sue-Antillio, Jill

    2007-01-01

    Fire-detection systems incorporating multiple sensors that measure multiple parameters are being developed for use in storage depots, cargo bays of ships and aircraft, and other locations not amenable to frequent, direct visual inspection. These systems are intended to improve upon conventional smoke detectors, now used in such locations, that reliably detect fires but also frequently generate false alarms: for example, conventional smoke detectors based on the blockage of light by smoke particles are also affected by dust particles and water droplets and, thus, are often susceptible to false alarms. In contrast, by utilizing multiple parameters associated with fires, i.e. not only obscuration by smoke particles but also concentrations of multiple chemical species that are commonly generated in combustion, false alarms can be significantly decreased while still detecting fires as reliably as older smoke-detector systems do. The present development includes fabrication of sensors that have, variously, micrometer- or nanometer-sized features so that such multiple sensors can be integrated into arrays that have sizes, weights, and power demands smaller than those of older macroscopic sensors. The sensors include resistors, electrochemical cells, and Schottky diodes that exhibit different sensitivities to the various airborne chemicals of interest. In a system of this type, the sensor readings are digitized and processed by advanced signal-processing hardware and software to extract such chemical indications of fires as abnormally high concentrations of CO and CO2, possibly in combination with H2 and/or hydrocarbons. The system also includes a microelectromechanical systems (MEMS)-based particle detector and classifier device to increase the reliability of measurements of chemical species and particulates. In parallel research, software for modeling the evolution of a fire within an aircraft cargo bay has been developed. The model implemented in the software can describe the concentrations of chemical species and of particulate matter as functions of time. A system of the present developmental type and a conventional fire detector were tested under both fire and false-alarm conditions in a Federal Aviation Administration cargo-compartment- testing facility. Both systems consistently detected fires. However, the conventional fire detector consistently generated false alarms, whereas the developmental system did not generate any false alarms.
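
    A minimal sketch of the multi-parameter decision idea is shown below: an alarm requires agreement between the optical (obscuration) channel and the chemical channel, which is what suppresses dust and water-droplet false alarms. All thresholds are illustrative assumptions, not values from the developmental system.

```python
"""Minimal sketch of multi-parameter alarm logic: require smoke obscuration and
chemical signatures to agree before declaring a fire. Thresholds are assumptions."""

def fire_alarm(obscuration_pct_per_m, co_ppm, co2_ppm, h2_ppm=0.0, hydrocarbon_ppm=0.0):
    smoke = obscuration_pct_per_m > 1.0          # optical channel
    chemistry = (co_ppm > 25 and co2_ppm > 800) or h2_ppm > 50 or hydrocarbon_ppm > 50
    return smoke and chemistry                   # require agreement of both channels

# Dust cloud: obscuration alone would trip a conventional detector, but not this logic.
print(fire_alarm(obscuration_pct_per_m=3.0, co_ppm=2, co2_ppm=420))    # False
# Smoldering fire: smoke plus elevated CO and CO2.
print(fire_alarm(obscuration_pct_per_m=2.5, co_ppm=60, co2_ppm=1500))  # True
```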

  5. Three-dimensional modeling of light rays on the surface of a slanted lenticular array for autostereoscopic displays.

    PubMed

    Jung, Sung-Min; Kang, In-Byeong

    2013-08-10

    In this paper, we developed an optical model describing the behavior of light at the surface of a slanted lenticular array for autostereoscopic displays in three dimensions and simulated the optical characteristics of autostereoscopic displays using the Monte Carlo method under actual design conditions. The behavior of light is analyzed by light rays for selected inclination and azimuthal angles; numerical aberrations and conditions of total internal reflection for the lenticular array were found. The intensity and the three-dimensional crosstalk distributions calculated from our model coincide very well with those from conventional design software, and our model shows highly enhanced calculation speed that is 67 times faster than that of the conventional software. From the results, we think that the optical model is very useful for predicting the optical characteristics of autostereoscopic displays with enhanced calculation speed.
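
    The ray behavior at each lenticular surface in such a model is presumably governed by refraction and total internal reflection; a commonly used vector form of Snell's law (stated here as an assumption, not as the paper's exact formulation) is:

```latex
% Vector form of Snell's law at a lens surface (assumed model detail):
% incident unit direction \hat{d}, outward unit normal \hat{n}, index ratio \eta = n_1/n_2.
\[
\cos\theta_i = -\hat{\mathbf{n}}\cdot\hat{\mathbf{d}}, \qquad
\cos\theta_t = \sqrt{\,1-\eta^2\left(1-\cos^2\theta_i\right)}, \qquad
\hat{\mathbf{t}} = \eta\,\hat{\mathbf{d}} + \left(\eta\cos\theta_i-\cos\theta_t\right)\hat{\mathbf{n}},
\]
% with total internal reflection whenever \eta^2\left(1-\cos^2\theta_i\right) > 1.
```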

  6. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac.

    PubMed

    Chen, Guang-Pei; Ahunbay, Ergun; Li, X Allen

    2016-04-01

    To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without a presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on modified Clarkson integration algorithm was developed and tested for a series of clinical situations. ArtQA has been used in their clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.
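
    A minimal sketch of the plan-versus-R&V consistency check described above is given below; the beam parameters, field names, and MU tolerance are hypothetical, and ArtQA's actual database access through ODBC and SQL is not reproduced.

```python
"""Minimal sketch of comparing per-beam parameters from the TPS against the R&V
system and flagging deviations beyond tolerance. Field names, tolerance, and
values are hypothetical; this is not ArtQA's implementation."""

MU_TOLERANCE = 0.5   # monitor units, assumed tolerance

tps_beam = {"gantry": 180.0, "collimator": 10.0, "energy_MV": 6, "mu": 152.3}
rv_beam  = {"gantry": 180.0, "collimator": 10.0, "energy_MV": 6, "mu": 151.9}

def compare_beam(tps, rv, mu_tol=MU_TOLERANCE):
    issues = []
    for key in ("gantry", "collimator", "energy_MV"):
        if tps[key] != rv[key]:
            issues.append(f"{key}: TPS={tps[key]} R&V={rv[key]}")
    if abs(tps["mu"] - rv["mu"]) > mu_tol:
        issues.append(f"MU differs by {abs(tps['mu'] - rv['mu']):.2f}")
    return issues

problems = compare_beam(tps_beam, rv_beam)
print("beam OK" if not problems else "\n".join(problems))
```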

  7. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guang-Pei, E-mail: gpchen@mcw.edu; Ahunbay, Ergun; Li, X. Allen

    Purpose: To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without a presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. Methods: The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose–volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA has been used in their clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. Conclusions: The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.

  8. Temperature distribution of thick thermoset composites

    NASA Astrophysics Data System (ADS)

    Guo, Zhan-Sheng; Du, Shanyi; Zhang, Boming

    2004-05-01

    The development of the temperature distribution in thick polymeric matrix laminates during an autoclave vacuum bag process was measured and compared with numerically calculated results. The finite element formulation of the transient heat transfer problem was carried out for polymeric matrix composite materials from the heat transfer differential equations, including internal heat generation produced by exothermic chemical reactions. Software based on a general-purpose finite element package was developed for numerical simulation of the entire composite process. From the experimental and numerical results, it was found that the measured temperature profiles were in good agreement with the numerical ones, and that conventional cure cycles recommended by prepreg manufacturers for thin laminates should be modified to prevent temperature overshoot.
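    As an illustration of the transient heat transfer problem described above, the following sketch integrates the 1D through-thickness heat equation with an exothermic cure-kinetics source term using an explicit finite-difference scheme. All material constants, kinetic parameters, and the cure cycle are assumed placeholder values, not those of the paper or of any prepreg manufacturer.

```python
import numpy as np

# Placeholder material and cure-kinetics parameters (assumed, for illustration only)
rho, cp, k = 1600.0, 1000.0, 0.6       # density [kg/m^3], specific heat [J/kg/K], conductivity [W/m/K]
H_total = 2.0e5                        # total heat of reaction [J/kg]
A, E, n = 1.0e5, 7.0e4, 1.5            # Arrhenius prefactor [1/s], activation energy [J/mol], reaction order
R_gas = 8.314

L, nx = 0.02, 41                       # 20 mm thick laminate, 41 through-thickness nodes
dx = L / (nx - 1)
alpha = k / (rho * cp)                 # thermal diffusivity
dt = 0.4 * dx**2 / alpha               # explicit stability limit with margin

T = np.full(nx, 300.0)                 # temperature [K]
a = np.zeros(nx)                       # degree of cure

def autoclave_temperature(t):
    """Placeholder one-ramp-and-hold cure cycle: 2 K/min up to 450 K."""
    return min(300.0 + (2.0 / 60.0) * t, 450.0)

t, peak = 0.0, T[nx // 2]
while t < 3 * 3600.0:                  # simulate three hours
    da_dt = A * np.exp(-E / (R_gas * T)) * (1.0 - a) ** n
    q = rho * H_total * da_dt          # volumetric heat generation [W/m^3]
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (alpha * lap + q[1:-1] / (rho * cp))
    a = np.clip(a + dt * da_dt, 0.0, 1.0)
    T[0] = T[-1] = autoclave_temperature(t)   # laminate surfaces follow the autoclave air
    peak = max(peak, T[nx // 2])
    t += dt

print(f"peak centreline temperature: {peak:.1f} K, final centreline cure: {a[nx // 2]:.2f}")
```

    A peak centreline temperature well above the autoclave set point in such a run is the overshoot phenomenon the abstract refers to.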

  9. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.

  10. EpHLA software: a timesaving and accurate tool for improving identification of acceptable mismatches for clinical purposes.

    PubMed

    Filho, Herton Luiz Alves Sales; da Mata Sousa, Luiz Claudio Demes; von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; dos Santos Neto, Pedro de Alcântara; do Nascimento, Ferraz; de Castro, Adail Fonseca; do Nascimento, Liliane Machado; Kneib, Carolina; Bianchi Cazarote, Helena; Mayumi Kitamura, Daniele; Torres, Juliane Roberta Dias; da Cruz Lopes, Laiane; Barros, Aryela Loureiro; da Silva Edlin, Evelin Nildiane; de Moura, Fernanda Sá Leal; Watanabe, Janine Midori Figueiredo; do Monte, Semiramis Jamil Hadad

    2012-06-01

    The HLAMatchmaker algorithm, which allows the identification of “safe” acceptable mismatches (AMMs) for recipients of solid organ and cell allografts, is rarely used, in part due to the difficulty of using it in the current Excel format. The automation of this algorithm may universalize its use to benefit the allocation of allografts. Recently, we have developed a new software tool called EpHLA, which is the first computer program automating the use of the HLAMatchmaker algorithm. Herein, we present the experimental validation of the EpHLA program by showing its time efficiency and quality of operation. The same results, obtained by a single antigen bead assay with sera from 10 sensitized patients waiting for kidney transplants, were analyzed either by the conventional HLAMatchmaker method or by the automated EpHLA method. Users testing these two methods were asked to record: (i) time required for completion of the analysis (in minutes); (ii) number of eplets obtained for class I and class II HLA molecules; (iii) categorization of eplets as reactive or non-reactive based on the MFI cutoff value; and (iv) determination of AMMs based on eplets' reactivities. We showed that although both methods had similar accuracy, the automated EpHLA method was over 8 times faster than the conventional HLAMatchmaker method. In particular, the EpHLA software was faster and more reliable than, and equally as accurate as, the conventional method for defining AMMs for allografts. The EpHLA software is an accurate and quick method for the identification of AMMs and thus it may be a very useful tool in the decision-making process of organ allocation for highly sensitized patients as well as in many other applications.
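    The eplet-reactivity step that EpHLA automates can be illustrated with a toy sketch like the one below: eplets are classified as reactive or non-reactive against an MFI cutoff, and candidate alleles carrying no reactive eplets are flagged as acceptable mismatches. All data structures, allele-to-eplet assignments, and MFI values are invented for illustration; this is not the EpHLA source code.

```python
# Toy illustration of eplet reactivity classification and AMM determination.
MFI_CUTOFF = 1000  # assumed cutoff; laboratories set their own value

# Highest MFI observed among beads carrying each eplet (hypothetical numbers)
eplet_max_mfi = {"62GE": 3400, "65QIA": 250, "144TKH": 120, "80I": 4100}

# Eplets carried by a few candidate donor alleles (hypothetical assignments)
allele_eplets = {
    "A*02:01": {"62GE", "144TKH"},
    "A*68:01": {"65QIA", "144TKH"},
    "B*44:02": {"80I"},
}

reactive = {e for e, mfi in eplet_max_mfi.items() if mfi >= MFI_CUTOFF}

for allele, eplets in allele_eplets.items():
    verdict = "acceptable mismatch" if eplets.isdisjoint(reactive) else "unacceptable"
    print(f"{allele}: {verdict} (reactive eplets: {sorted(eplets & reactive) or 'none'})")
```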

  11. Development of online NIR urine analyzing system based on AOTF

    NASA Astrophysics Data System (ADS)

    Wan, Feng; Sun, Zhendong; Li, Xiaoxia

    2006-09-01

    In this paper, some key techniques in the development of an on-line NIR urine analyzing system based on AOTF (Acousto-Optic Tunable Filter) are introduced. Problems in designing the optical system, including collimation of the incident light and the working distance (the shortest distance for separating the incident and diffracted light), are analyzed and researched. A DDS (Direct Digital Synthesizer) controlled by a microprocessor is used to realize the wavelength scan. The experimental results show that this NIR urine analyzing system based on AOTF has a 10000-4000 cm-1 wavelength range and a 0.3 ms wavelength transfer rate. Compared with the conventional Fourier Transform NIR spectrophotometer for analyzing multiple components in urine, this system features low cost, small volume and an on-line measurement function. Unscrambler software (multivariate statistical software by CAMO Inc., Norway) is selected for processing the data. This system can realize on-line quantitative analysis of protein, urea and creatinine in urine.

  12. State analysis requirements database for engineering complex embedded systems

    NASA Technical Reports Server (NTRS)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a means of capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  13. Reliability Analysis and Optimal Release Problem Considering Maintenance Time of Software Components for an Embedded OSS Porting Phase

    NASA Astrophysics Data System (ADS)

    Tamura, Yoshinobu; Yamada, Shigeru

    OSS (open source software) systems, which serve as key components of critical infrastructures in our social lives, continue to expand. In particular, embedded OSS systems such as Android, BusyBox, and TRON have been gaining a lot of attention in the embedded system area. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for the embedded OSS. Also, we analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for the embedded OSS. Moreover, we compare the proposed hazard rate model for the embedded OSS with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
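    A flavor of the goodness-of-fit comparison mentioned above can be given with a small sketch that fits two textbook lifetime models to inter-failure times and compares them by AIC. The failure data here are invented and the models are the standard exponential and Weibull distributions, not the flexible hazard rate model proposed in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical inter-failure times (hours) from an embedded OSS porting phase
intervals = np.array([3.1, 5.7, 2.4, 9.8, 14.2, 7.5, 21.0, 18.3, 30.6, 26.9])

def aic(log_likelihood, k_params):
    return 2 * k_params - 2 * log_likelihood

# Exponential model (constant hazard rate)
loc_e, scale_e = stats.expon.fit(intervals, floc=0)
ll_exp = stats.expon.logpdf(intervals, loc_e, scale_e).sum()

# Weibull model (increasing or decreasing hazard rate)
c_w, loc_w, scale_w = stats.weibull_min.fit(intervals, floc=0)
ll_wei = stats.weibull_min.logpdf(intervals, c_w, loc_w, scale_w).sum()

print(f"exponential: AIC = {aic(ll_exp, 1):.1f}")
print(f"weibull    : AIC = {aic(ll_wei, 2):.1f} (shape c = {c_w:.2f})")
# The model with the lower AIC fits the observed failure intervals better.
```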

  14. A Thermal Expert System (TEXSYS) development overview - AI-based control of a Space Station prototype thermal bus

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Hack, E. C.

    1990-01-01

    A knowledge-based control system for real-time control and fault detection, isolation and recovery (FDIR) of a prototype two-phase Space Station Freedom external thermal control system (TCS) is discussed in this paper. The Thermal Expert System (TEXSYS) has been demonstrated in recent tests to be capable of both fault anticipation and detection and real-time control of the thermal bus. Performance requirements were achieved by using a symbolic control approach, layering model-based expert system software on a conventional numerical data acquisition and control system. The model-based capabilities of TEXSYS were shown to be advantageous during software development and testing. One representative example is given from on-line TCS tests of TEXSYS. The integration and testing of TEXSYS with a live TCS testbed provides some insight on the use of formal software design, development and documentation methodologies to qualify knowledge-based systems for on-line or flight applications.

  15. Improving Earth Science Metadata: Modernizing ncISO

    NASA Astrophysics Data System (ADS)

    O'Brien, K.; Schweitzer, R.; Neufeld, D.; Burger, E. F.; Signell, R. P.; Arms, S. C.; Wilcox, K.

    2016-12-01

    ncISO is a package of tools developed at NOAA's National Center for Environmental Information (NCEI) that facilitates the generation of ISO 19115-2 metadata from NetCDF data sources. The tool currently exists in two iterations: a command line utility and a web-accessible service within the THREDDS Data Server (TDS). Several projects, including NOAA's Unified Access Framework (UAF), depend upon ncISO to generate the ISO-compliant metadata from their data holdings and use the resulting information to populate discovery tools such as NCEI's ESRI Geoportal and NOAA's data.noaa.gov CKAN system. In addition to generating ISO 19115-2 metadata, the tool calculates a rubric score based on how well the dataset follows the Attribute Conventions for Dataset Discovery (ACDD). The result of this rubric calculation, along with information about what has been included and what is missing, is displayed in an HTML document generated by the ncISO software package. Recently, ncISO has fallen behind in supporting updates to conventions such as the ACDD. With the blessing of the original programmer, NOAA's UAF has been working to modernize the ncISO software base. In addition to upgrading ncISO to utilize version 1.3 of the ACDD, we have been working with partners at Unidata and IOOS to unify the tool's code base. In essence, we are merging the command line capabilities into the same software that will now be used by the TDS service, allowing easier updates when conventions such as the ACDD are updated in the future. In this presentation, we will discuss the work the UAF project has done to support updated conventions within ncISO, as well as describe how the updated tool is helping to improve metadata throughout the earth and ocean sciences.
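    The rubric idea can be illustrated with a few lines of Python that score a NetCDF file against a small subset of ACDD global attributes. This is only a toy scoring scheme for illustration; ncISO itself implements a much richer, weighted rubric, and the file path below is hypothetical.

```python
from netCDF4 import Dataset

# A small subset of ACDD global attributes (the real convention defines many more)
ACDD_ATTRS = ["title", "summary", "keywords", "Conventions", "license",
              "creator_name", "date_created", "geospatial_lat_min",
              "geospatial_lat_max", "time_coverage_start"]

def rubric_score(path):
    """Fraction of the listed ACDD attributes present as global attributes."""
    with Dataset(path) as ds:
        present = [a for a in ACDD_ATTRS if a in ds.ncattrs()]
        missing = [a for a in ACDD_ATTRS if a not in present]
    return len(present) / len(ACDD_ATTRS), missing

score, missing = rubric_score("example.nc")   # hypothetical file path
print(f"ACDD completeness: {score:.0%}; missing: {', '.join(missing) or 'none'}")
```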

  16. Engineering Complex Embedded Systems with State Analysis and the Mission Data System

    NASA Technical Reports Server (NTRS)

    Ingham, Michel D.; Rasmussen, Robert D.; Bennett, Matthew B.; Moncada, Alex C.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering methodology called State Analysis, which provides a process for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using State Analysis and how these requirements inform the design of the system software, using representative spacecraft examples.

  17. Software Analyzes Complex Systems in Real Time

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Expert system software programs, also known as knowledge-based systems, are computer programs that emulate the knowledge and analytical skills of one or more human experts, related to a specific subject. SHINE (Spacecraft Health Inference Engine) is one such program, a software inference engine (expert system) designed by NASA for the purpose of monitoring, analyzing, and diagnosing both real-time and non-real-time systems. It was developed to meet many of the Agency's demanding and rigorous artificial intelligence goals for current and future needs. NASA developed the sophisticated and reusable software based on the experience and requirements of its Jet Propulsion Laboratory's (JPL) Artificial Intelligence Research Group in developing expert systems for space flight operations, specifically the diagnosis of spacecraft health. It was designed to be efficient enough to operate in demanding real-time and limited hardware environments, and to be utilized by non-expert systems applications written in conventional programming languages. The technology is currently used in several ongoing NASA applications, including the Mars Exploration Rovers and the Spacecraft Health Automatic Reasoning Pilot (SHARP) program for the diagnosis of telecommunication anomalies during the Neptune Voyager Encounter. It is also finding applications outside of the Space Agency.

  18. A Discussion of the Software Quality Assurance Role

    NASA Technical Reports Server (NTRS)

    Kandt, Ronald Kirk

    2010-01-01

    The basic idea underlying this paper is that the conventional understanding of the role of a Software Quality Assurance (SQA) engineer is unduly limited. This is because few have asked who the customers of a SQA engineer are. Once you do this, you can better define what tasks a SQA engineer should perform, as well as identify the knowledge and skills that such a person should have. The consequence of doing this is that a SQA engineer can provide greater value to his or her customers. It is the position of this paper that a SQA engineer providing significant value to his or her customers must not only assume the role of an auditor, but also that of a software and systems engineer. This is because software engineers and their managers particularly value contributions that directly impact products and their development. These ideas are summarized as lessons learned, based on my experience at Jet Propulsion Laboratory (JPL).

  19. OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4.

    PubMed

    Schober, Daniel; Tudose, Ilinca; Svatek, Vojtech; Boeker, Martin

    2012-09-21

    Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This fact is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. We provide a plugin for the Protégé Ontology editor to allow for easy checks of compliance with ontology naming conventions and metadata completeness, as well as curation in case of found violations. In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigate the capabilities needed for software tools to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies. Based on these test results, the plugin was refined, in part through the integration of new functionality. The new Protégé plugin, OntoCheck, allows for ontology tests to be carried out on OWL ontologies. In particular, the OntoCheck plugin helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Found test violations can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints like name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts and briefly outline improvements resulting from its application. Further, we discuss OntoCheck's capabilities in the context of related tools and highlight potential future expansions. The OntoCheck plugin facilitates labelling error detection and curation, contributing to lexical quality assurance in OWL ontologies. Ultimately, we hope this Protégé extension will ease ontology alignments as well as lexical post-processing of annotated data and hence can increase overall secondary data usage by humans and computers.
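    A toy version of the label checks OntoCheck performs is sketched below: class labels are tested against a simple naming pattern and for the presence of a textual definition. The ontology entries and the chosen pattern are invented for illustration; the actual plugin is implemented in Java against the Protégé 4.1 libraries.

```python
import re

# Assumed convention for this sketch: lowercase words separated by single spaces
LABEL_PATTERN = re.compile(r"^[a-z][a-z0-9]*( [a-z0-9]+)*$")

classes = {  # IRI fragment -> (rdfs:label, definition or None); hypothetical entries
    "OBI_0000070": ("assay", "a planned process that produces information..."),
    "OBI_0302716": ("Pooled_Specimen", None),
    "OBI_0000094": ("material processing", "a planned process that changes..."),
}

for iri, (label, definition) in classes.items():
    problems = []
    if not LABEL_PATTERN.match(label):
        problems.append("label violates naming convention")
    if definition is None:
        problems.append("missing textual definition")
    if problems:
        print(f"{iri} ('{label}'): " + "; ".join(problems))
```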

  20. Knowledge base methodology: Methodology for first Engineering Script Language (ESL) knowledge base

    NASA Technical Reports Server (NTRS)

    Peeris, Kumar; Izygon, Michel E.

    1992-01-01

    The primary goal of reusing software components is that software can be developed faster, cheaper and with higher quality. However, reuse is not automatic and cannot just happen. It has to be carefully engineered. For example, a component needs to be easily understandable in order to be reused, and it also has to be malleable enough to fit into different applications. In fact, the software development process is deeply affected when reuse is being applied. During component development, a serious effort has to be directed toward making these components as reusable as possible. This implies defining reuse coding style guidelines and applying them to any new component being created as well as to any old component being modified. These guidelines should point out the favorable reuse features and may apply to naming conventions, module size and cohesion, internal documentation, etc. During application development, effort is shifted from writing new code toward finding and possibly modifying existing pieces of code, then assembling them together. We see here that reuse is not free, and therefore has to be carefully managed.

  1. Perspective on intelligent avionics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, H.L.

    1987-01-01

    Technical issues which could potentially limit the capability and acceptability of expert system decision-making for avionics applications are addressed. These issues are: real-time AI, mission-critical software, conventional algorithms, pilot interface, knowledge acquisition, and distributed expert systems. Examples from on-going expert system development programs are presented to illustrate likely architectures and applications of future intelligent avionic systems. 13 references.

  2. Algorithm for Automatic Segmentation of Nuclear Boundaries in Cancer Cells in Three-Channel Luminescent Images

    NASA Astrophysics Data System (ADS)

    Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.

    2015-09-01

    We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
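    The core idea of using inter-channel correlation for segmentation can be sketched as follows: a windowed Pearson correlation between two fluorescence channels is computed and thresholded to form a mask. The window size, threshold, and synthetic images are placeholders, not the published algorithm's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_correlation(ch1, ch2, size=15):
    """Pearson correlation of two images inside a sliding size-by-size window."""
    ch1, ch2 = ch1.astype(float), ch2.astype(float)
    m1, m2 = uniform_filter(ch1, size), uniform_filter(ch2, size)
    cov = uniform_filter(ch1 * ch2, size) - m1 * m2
    v1 = uniform_filter(ch1 ** 2, size) - m1 ** 2
    v2 = uniform_filter(ch2 ** 2, size) - m2 ** 2
    return cov / np.sqrt(np.clip(v1 * v2, 1e-12, None))

# Synthetic example: a smooth "nucleus" present in both channels, plus independent noise
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
blob = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2.0 * 15.0 ** 2))
ch_a = blob + 0.05 * rng.standard_normal(blob.shape)
ch_b = 0.8 * blob + 0.05 * rng.standard_normal(blob.shape)

mask = local_correlation(ch_a, ch_b) > 0.5   # assumed correlation threshold
print("pixels in the correlated (object) region:", int(mask.sum()))
```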

  3. Efficiency enhancement for natural gas liquefaction with CO2 capture and sequestration through cycles innovation and process optimization

    NASA Astrophysics Data System (ADS)

    Alabdulkarem, Abdullah

    Liquefied natural gas (LNG) plants are energy intensive. As a result, the power plants operating these LNG plants emit high amounts of CO2. To mitigate the global warming caused by the increase in atmospheric CO2, CO2 capture and sequestration (CCS) using amine absorption is proposed. However, the major challenge of implementing this CCS system is the associated power requirement, which increases power consumption by about 15-25%. Therefore, the main scope of this work is to tackle this challenge by minimizing CCS power consumption as well as that of the entire LNG plant through system integration and rigorous optimization. The power consumption of the LNG plant was reduced through improving the process of liquefaction itself. In this work, a genetic algorithm (GA) was used to optimize a propane pre-cooled mixed-refrigerant (C3-MR) LNG plant modeled using HYSYS software. An optimization platform coupling Matlab with HYSYS was developed. New refrigerant mixtures were found, with savings in power consumption as high as 13%. LNG plant optimization with variable natural gas feed compositions was addressed, and a solution was proposed through applying robust optimization techniques, resulting in a robust refrigerant which can liquefy a range of natural gas feeds. The second approach for reducing the power consumption is through process integration and waste heat utilization in the integrated CCS system. Four waste heat sources and six potential uses were uncovered and evaluated using HYSYS software. The developed models were verified against experimental data from the literature with good agreement. The net available power enhancement in one of the proposed CCS configurations is 16% more than in the conventional CCS configuration. To reduce the power required to pressurize CO2 into a well for enhanced oil recovery (EOR) applications, five CO2 pressurization methods were explored. New CO2 liquefaction cycles were developed and modeled using HYSYS software. One of the developed liquefaction cycles, using NH3 as a refrigerant, resulted in 5% less power consumption than the conventional multi-stage compression cycle. Finally, a new concept for providing the CO2 regeneration heat is proposed. The proposed concept uses a heat pump to provide the regeneration heat as well as process heat and CO2 liquefaction heat. Seven configurations of heat pumps integrated with CCS were developed. One of the heat pumps consumes 24% less power than the conventional system, or 59% less total equivalent power demand than the conventional system with steam extraction and CO2 compression.
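    The optimization wrapper pattern used in the study (a genetic algorithm in Matlab driving HYSYS flowsheet evaluations) can be sketched in miniature as below. The HYSYS call is replaced by a made-up smooth surrogate for compressor power, so the numbers mean nothing physically; only the structure of the search over refrigerant mole fractions is illustrated.

```python
import numpy as np
from scipy.optimize import differential_evolution

TARGET = np.array([0.25, 0.35, 0.25, 0.15])   # pretend "ideal" composition (mock)

def compressor_power(x):
    """Mock objective: compression power [MW] as a function of mole fractions."""
    frac = np.asarray(x) / np.sum(x)           # renormalize so fractions sum to 1
    return 50.0 + 200.0 * np.sum((frac - TARGET) ** 2)

bounds = [(0.01, 1.0)] * 4                     # one bound pair per mixture component
result = differential_evolution(compressor_power, bounds, seed=1, tol=1e-8)
best = result.x / result.x.sum()
print("best mole fractions:", np.round(best, 3), "power:", round(result.fun, 2), "MW")
```

    In the real workflow the objective function would submit the candidate composition to the process simulator and read back the computed power, which is why each evaluation is expensive and the choice of optimizer matters.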

  4. Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community

    NASA Astrophysics Data System (ADS)

    Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.

    2016-12-01

    The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less to building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborations. Especially in the domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code for simulating thermal convection, which began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons for the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community. Over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate that each will bring its own unique challenges.

  5. Advanced Data Format (ADF) Software Library and Users Guide

    NASA Technical Reports Server (NTRS)

    Smith, Matthew; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial. Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its 1/0 software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The ADF is a generic database manager with minimal intrinsic capability. It was written for the purpose of storing large numerical datasets in an efficient, platform independent manner. To be effective, it must be used in conjunction with external agreements on how the data will be organized within the ADF database such defined by the SIDS. There are currently 34 user callable functions that comprise the ADF Core library and are described in the Users Guide. The library is written in C, but each function has a FORTRAN counterpart.

  6. Hybrid Energy System Design of Micro Hydro-PV-biogas Based Micro-grid

    NASA Astrophysics Data System (ADS)

    Nishrina; Abdullah, A. G.; Risdiyanto, A.; Nandiyanto, ABD

    2017-03-01

    A hybrid renewable energy system is an arrangement of one or more sources of renewable energy together with conventional energy. This paper describes simulation results for a hybrid renewable power system based on the available potential at an educational institution in Indonesia. HOMER software was used to simulate and analyse the system in terms of both optimization and economics. This software is built around 3 main principles: simulation, optimization, and sensitivity analysis. Overall, the presented results show that the software can identify a feasible hybrid power system that could be realized. The entire demand in the case study area can be supplied by the system configuration and is met by three quarters of the electricity production, so one quarter of the generated energy becomes excess electricity.

  7. Use of Soft Computing Technologies For Rocket Engine Control

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Olcmen, Semih; Polites, Michael

    2003-01-01

    The problem addressed in this paper is to explore how Soft Computing Technologies (SCT) could be employed to further improve overall engine system reliability and performance. Specifically, this is presented by enhancing rocket engine control and engine health management (EHM) using SCT coupled with conventional control technologies and the sound software engineering practices used in Marshall's Flight Software Group. The principal goals are to improve software management, software development time and maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control and EHM methodologies, but to provide alternative design choices for control, EHM, implementation, performance, and sustaining engineering. The approaches outlined in this paper require knowledge in the fields of rocket engine propulsion, software engineering for embedded systems, and soft computing technologies (i.e., neural networks, fuzzy logic, and Bayesian belief networks), much of which is presented in this paper. The first targeted demonstration rocket engine platform is the MC-1 (formerly the FASTRAC Engine), which is simulated with hardware and software in the Marshall Avionics & Software Testbed laboratory.

  8. Crystal Symmetry Algorithms in a High-Throughput Framework for Materials

    NASA Astrophysics Data System (ADS)

    Taylor, Richard

    The high-throughput framework AFLOW, which has been developed and used successfully over the last decade, is improved to include fully integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform to the conventions and prescriptions given in the International Tables for Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions about the input cell orientation, origin, or reduction and has been integrated into the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and an examination of the algorithms' scaling with cell size and symmetry are also reported.
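    As a small, hedged example of this kind of symmetry characterization (using the open-source spglib library rather than AFLOW itself), the following determines the space group of a conventional fcc aluminum cell from its lattice, fractional coordinates, and atomic numbers.

```python
import numpy as np
import spglib

lattice = 4.05 * np.eye(3)                      # cubic cell, a = 4.05 angstrom
positions = [[0.0, 0.0, 0.0], [0.0, 0.5, 0.5],  # fcc basis in fractional coordinates
             [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
numbers = [13, 13, 13, 13]                      # atomic numbers (aluminum)

cell = (lattice, positions, numbers)
print(spglib.get_spacegroup(cell, symprec=1e-5))  # expected output: 'Fm-3m (225)'
```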

  9. Managing Scientific Software Complexity with Bocca and CCA

    DOE PAGES

    Allan, Benjamin A.; Norris, Boyana; Elwasif, Wael R.; ...

    2008-01-01

    In high-performance scientific software development, the emphasis is often on short time to first solution. Even when the development of new components mostly reuses existing components or libraries and only small amounts of new code must be created, dealing with the component glue code and software build processes to obtain complete applications is still tedious and error-prone. Component-based software meant to reduce complexity at the application level increases complexity to the extent that the user must learn and remember the interfaces and conventions of the component model itself. To address these needs, we introduce Bocca, the first tool to enable application developers to perform rapid component prototyping while maintaining robust software-engineering practices suitable to HPC environments. Bocca provides project management and a comprehensive build environment for creating and managing applications composed of Common Component Architecture components. Of critical importance for high-performance computing (HPC) applications, Bocca is designed to operate in a language-agnostic way, simultaneously handling components written in any of the languages commonly used in scientific applications: C, C++, Fortran, Python and Java. Bocca automates the tasks related to the component glue code, freeing the user to focus on the scientific aspects of the application. Bocca embraces the philosophy pioneered by Ruby on Rails for web applications: start with something that works, and evolve it to the user's purpose.

  10. Preparing your Offshore Organization for Agility: Experiences in India

    NASA Astrophysics Data System (ADS)

    Srinivasan, Jayakanth

    Two strategies that have significantly changed the way we conventionally think about managing software development and sustainment are the family of development approaches collectively referred to as agile methods, and the distribution of development efforts on a global scale. When you combine the two strategies, organizations have to address not only the technical challenges that arise from introducing new ways of working, but more importantly have to manage the 'soft' factors that if ignored lead to hard challenges. Using two case studies of distributed agile software development in India we illustrate the areas that organizations need to be aware of when transitioning work to India. The key issues that we emphasize are the need to recruit and retain personnel; the importance of teaching, mentoring and coaching; the need to manage customer expectations; the criticality of well-articulated senior leadership vision and commitment; and the reality of operating in a heterogeneous process environment.

  11. SenseCube—a novel inexpensive wireless multisensor for physics lab experimentations

    NASA Astrophysics Data System (ADS)

    Mehta, Vedant; Lane, Charles D.

    2018-07-01

    SenseCube is a multisensor capable of measuring many different real-time events and changes in environment. Most conventional sensors used in introductory-physics labs use their own software and have wires that must be attached to a computer or an alternate device to analyze the data. This makes the standard sensors time consuming, tedious, and space-constricted. SenseCube was developed to overcome these limitations. This research was focused on developing a device that is all-encompassing, cost-effective, wireless, and compact, yet can perform the same tasks as the multiple standard sensors normally used in physics labs. It measures more than twenty distinct types of real-time events and transfers the data via Bluetooth. Both Windows and Mac software were developed so that the data from this device can be retrieved and/or saved on either platform. This paper describes the sensor itself, its development, its capabilities, and its cost comparison with standard sensors.

  12. Ankle-Foot Orthosis Made by 3D Printing Technique and Automated Design Software

    PubMed Central

    Cha, Yong Ho; Lee, Keun Ho; Ryu, Hong Jong; Joo, Il Won; Seo, Anna; Kim, Dong-Hyeon

    2017-01-01

    We describe a 3D printing technique and automated design software, together with the clinical results after the application of this AFO to a patient with foot drop. After acquiring a 3D modelling file of the lower leg of a patient with peroneal neuropathy using a 3D scanner, we loaded this file into the automated orthosis software and created the “STL” file. The designed AFO was printed using a fused filament fabrication type 3D printer, and a mechanical stress test was performed. The patient alternated between the 3D-printed and conventional AFOs for 2 months. There was no crack or damage, and the shape and stiffness of the AFO did not change after the durability test. The gait speed increased after wearing the conventional AFO (56.5 cm/sec) and 3D-printed AFO (56.5 cm/sec) compared to that without an AFO (42.2 cm/sec). The patient was more satisfied with the 3D-printed AFO than the conventional AFO in terms of the weight and ease of use. The 3D-printed AFO exhibited similar functionality to the conventional AFO and considerably satisfied the patient in terms of the weight and ease of use. We suggest the possibility of the individualized AFO with 3D printing techniques and automated design software. PMID:28827977

  13. Waste gasification vs. conventional Waste-to-Energy: a comparative evaluation of two commercial technologies.

    PubMed

    Consonni, Stefano; Viganò, Federico

    2012-04-01

    A number of waste gasification technologies are currently proposed as an alternative to conventional Waste-to-Energy (WtE) plants. Assessing their potential is made difficult by the scarce operating experience and the fragmentary data available. After defining a conceptual framework to classify and assess waste gasification technologies, this paper compares two of the proposed technologies with conventional WtE plants. Performances are evaluated by proprietary software developed at Politecnico di Milano and compared on the basis of a coherent set of assumptions. Since the two gasification technologies are configured as "two-step oxidation" processes, their energy performances are very similar to those of conventional plants. The potential benefits that may justify their adoption relate to material recovery and operation/emission control: recovery of metals in non-oxidized form; collection of ashes in inert, vitrified form; combustion control; lower generation of some pollutants. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. The EMIR experience in the use of software control simulators to speed up the time to telescope

    NASA Astrophysics Data System (ADS)

    Lopez Ramos, Pablo; López-Ruiz, J. C.; Moreno Arce, Heidy; Rosich, Josefina; Perez Menor, José Maria

    2012-09-01

    One of the main problems facing development teams working on instrument control systems is the need to access mechanisms which are not available until well into the integration phase. The need to work with real hardware creates additional problems, among others: certain faults cannot be tested due to the possibility of hardware damage, taking the system to its limits may shorten its operational lifespan, and the full system may not be available during some periods due to maintenance and/or testing of individual components. These problems can be addressed with the use of simulators and by applying software/hardware standards. Since information on the construction and performance of electro-mechanical systems is available at relatively early stages of the project, simulators can be developed in advance (before the mechanism exists) or, if conventions and standards have been correctly followed, a previously developed simulator might be reused. This article describes our experience in building software simulators and the main advantages we have identified: the control software can be developed even in the absence of real hardware; critical tests can be prepared using the simulated systems; system behavior can be tested in hardware-failure situations that would pose a risk to the real system; and in-house integration of the entire instrument is sped up. The use of simulators allows us to reduce development, testing and integration time.
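    The simulator approach can be sketched with a minimal example: the control software is written against an abstract mechanism interface, and a software simulator implements that interface until the real hardware is available. The names, interface, and fault behavior below are invented for illustration.

```python
from abc import ABC, abstractmethod

class Mechanism(ABC):
    """Abstract interface shared by the real hardware driver and its simulator."""
    @abstractmethod
    def move_to(self, position_deg: float) -> None: ...
    @abstractmethod
    def position(self) -> float: ...

class SimulatedRotator(Mechanism):
    """Ideal kinematic stand-in used before the real mechanism exists."""
    def __init__(self):
        self._pos = 0.0
    def move_to(self, position_deg):
        if not -180.0 <= position_deg <= 180.0:   # simulate a limit-switch fault
            raise RuntimeError("limit switch tripped")
        self._pos = position_deg
    def position(self):
        return self._pos

def control_sequence(mech: Mechanism):
    """Instrument control code: unchanged whether it drives hardware or a simulator."""
    for target in (10.0, 45.0, -30.0):
        mech.move_to(target)
        print(f"at {mech.position():+.1f} deg")

control_sequence(SimulatedRotator())
```

    When the real hardware driver becomes available it only needs to implement the same interface, so the control sequence and its tests carry over unchanged.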

  15. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.

    PubMed

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo

    2016-12-13

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.

  16. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes

    PubMed Central

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J.; Wang, Liliang; Lin, Jianguo

    2016-01-01

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions. PMID:28060298

  17. The Implementation of Satellite Control System Software Using Object Oriented Design

    NASA Technical Reports Server (NTRS)

    Anderson, Mark O.; Reid, Mark; Drury, Derek; Hansell, William; Phillips, Tom

    1998-01-01

    NASA established the Small Explorer (SMEX) program in 1988 to provide frequent opportunities for highly focused and relatively inexpensive space science missions that can be launched into low earth orbit by small expendable vehicles. The development schedule for each SMEX spacecraft was three years from start to launch. The SMEX program has produced five satellites; Solar Anomalous and Magnetospheric Particle Explorer (SAMPEX), Fast Auroral Snapshot Explorer (FAST), Submillimeter Wave Astronomy Satellite (SWAS), Transition Region and Coronal Explorer (TRACE) and Wide-Field Infrared Explorer (WIRE). SAMPEX and FAST are on-orbit, TRACE is scheduled to be launched in April of 1998, WIRE is scheduled to be launched in September of 1998, and SWAS is scheduled to be launched in January of 1999. In each of these missions, the Attitude Control System (ACS) software was written using a modular procedural design. Current program goals require complete spacecraft development within 18 months. This requirement has increased pressure to write reusable flight software. Object-Oriented Design (OOD) offers the constructs for developing an application that only needs modification for mission unique requirements. This paper describes the OOD that was used to develop the SMEX-Lite ACS software. The SMEX-Lite ACS is three-axis controlled, momentum stabilized, and is capable of performing sub-arc-minute pointing. The paper first describes the high level requirements which governed the architecture of the SMEX-Lite ACS software. Next, the context in which the software resides is explained. The paper describes the benefits of encapsulation, inheritance and polymorphism with respect to the implementation of an ACS software system. This paper will discuss the design of several software components that comprise the ACS software. Specifically, Object-Oriented designs are presented for sensor data processing, attitude control, attitude determination and failure detection. The paper addresses the benefits of the OOD versus a conventional procedural design. The final discussion in this paper will address the establishment of the ACS Foundation Class (AFC) Library. The AFC is a large software repository, requiring a minimal amount of code modifications to produce ACS software for future projects, saving production time and costs.
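    The benefit of encapsulation and polymorphism described above can be illustrated with a hand-written sketch (not the SMEX-Lite flight code): a common sensor interface lets the attitude-determination loop iterate over whatever sensors a mission provides, so mission-unique hardware only requires a new subclass. The sensor types and values are invented for illustration.

```python
from abc import ABC, abstractmethod

class AttitudeSensor(ABC):
    @abstractmethod
    def read(self) -> dict:
        """Return a processed measurement in a common engineering-unit format."""

class CoarseSunSensor(AttitudeSensor):
    def read(self):
        return {"type": "sun_vector", "value": (0.97, 0.20, 0.14)}   # mock unit vector

class Magnetometer(AttitudeSensor):
    def read(self):
        return {"type": "b_field_nT", "value": (21000.0, -3500.0, 44000.0)}  # mock field

def determination_step(sensors):
    """Attitude-determination loop: iterates over whatever sensors the mission provides."""
    for s in sensors:
        m = s.read()
        print(f"{type(s).__name__:>15}: {m['type']} = {m['value']}")

determination_step([CoarseSunSensor(), Magnetometer()])
```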

  18. HOMER® Energy Modeling Software 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Tom

    2003-12-31

    The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaic, hydropower, batteries, fuel cells, biomass and other inputs.

  19. HOMER® Energy Modeling Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Tom

    2000-12-31

    The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaic, hydropower, batteries, fuel cells, biomass and other inputs.

  20. Applied Research Study

    NASA Technical Reports Server (NTRS)

    Leach, Ronald J.

    1997-01-01

    The purpose of this project was to study the feasibility of reusing major components of a software system that had been used to control the operations of a spacecraft launched in the 1980s. The study was done in the context of a ground data processing system that was to be rehosted from a large mainframe to an inexpensive workstation. The study concluded that a systematic approach using inexpensive tools could aid in the reengineering process by identifying a set of certified reusable components. The study also developed procedures for determining duplicate versions of software, which were created because of inadequate naming conventions. Such procedures reduced reengineering costs by approximately 19.4 percent.
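    One simple way to detect duplicate versions of source files, in the spirit of the procedures the study developed, is to group files by a content hash, as sketched below. The directory name and file pattern are hypothetical, and this is not the procedure actually used in the report.

```python
import hashlib
from pathlib import Path
from collections import defaultdict

def find_duplicates(root: str, pattern: str = "*.f"):
    """Group files under `root` whose contents are byte-for-byte identical."""
    groups = defaultdict(list)
    for path in Path(root).rglob(pattern):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(path)
    return [paths for paths in groups.values() if len(paths) > 1]

for dupes in find_duplicates("ground_system_src"):   # hypothetical directory
    print("identical files:", ", ".join(str(p) for p in dupes))
```

    A content hash only catches byte-identical copies; duplicates created by inconsistent naming conventions but later edited independently would need a fuzzier comparison.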

  1. Rapid Prototyping: A Survey and Evaluation of Methodologies and Models

    DTIC Science & Technology

    1990-03-01

    possibility of program coding errors or design differences from the actual prototype the user validated. The methodology should result in a production...behavior within the problem domain to be defined. "Each method has a different approach towards developing the set of symbols with which to define the...investigate prototyping as a viable alternative to the conventional method of software development. By the mid-1980s, it was evident that the traditional

  2. Artificial Neural Networks-Based Software for Measuring Heat Collection Rate and Heat Loss Coefficient of Water-in-Glass Evacuated Tube Solar Water Heaters

    PubMed Central

    Liu, Zhijian; Liu, Kejun; Li, Hao; Zhang, Xinyu; Jin, Guangya; Cheng, Kewei

    2015-01-01

    Measurements of heat collection rate and heat loss coefficient are crucial for the evaluation of in-service water-in-glass evacuated tube solar water heaters. However, conventional measurement requires expensive detection devices and undergoes a series of complicated procedures. To simplify the measurement and reduce the cost, software based on artificial neural networks for measuring the heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters was developed. Using multilayer feed-forward neural networks with a back-propagation algorithm, we developed and tested our program on the basis of 915 measured samples of water-in-glass evacuated tube solar water heaters. This artificial neural networks-based software program automatically obtained accurate heat collection rate and heat loss coefficient using parameters acquired simply with "portable test instruments", including tube length, number of tubes, tube center distance, heat water mass in tank, collector area, angle between tubes and ground and final temperature. Our results show that this software (on both personal computer and Android platforms) is efficient and convenient for predicting the heat collection rate and heat loss coefficient due to its low root mean square errors in prediction. The software can now be downloaded from http://t.cn/RLPKF08. PMID:26624613

  3. Artificial Neural Networks-Based Software for Measuring Heat Collection Rate and Heat Loss Coefficient of Water-in-Glass Evacuated Tube Solar Water Heaters.

    PubMed

    Liu, Zhijian; Liu, Kejun; Li, Hao; Zhang, Xinyu; Jin, Guangya; Cheng, Kewei

    2015-01-01

    Measurements of heat collection rate and heat loss coefficient are crucial for the evaluation of in-service water-in-glass evacuated tube solar water heaters. However, conventional measurement requires expensive detection devices and undergoes a series of complicated procedures. To simplify the measurement and reduce the cost, software based on artificial neural networks for measuring heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters was developed. Using multilayer feed-forward neural networks with a back-propagation algorithm, we developed and tested our program on the basis of 915 measured samples of water-in-glass evacuated tube solar water heaters. This artificial neural networks-based software program automatically obtained accurate heat collection rate and heat loss coefficient using parameters acquired simply with "portable test instruments", including tube length, number of tubes, tube center distance, heat water mass in tank, collector area, angle between tubes and ground and final temperature. Our results show that this software (on both personal computer and Android platforms) is efficient and convenient for predicting the heat collection rate and heat loss coefficient due to its low root mean square errors in prediction. The software can now be downloaded from http://t.cn/RLPKF08.
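    The modeling workflow described above can be sketched with a small feed-forward network that maps the listed collector parameters to a target quantity. The training data below are random placeholders standing in for the 915 measured samples, so the fitted model is meaningless; only the pipeline is illustrated.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FEATURES = ["tube_length", "n_tubes", "tube_center_distance", "tank_water_mass",
            "collector_area", "tube_ground_angle", "final_temperature"]

# Placeholder data: in practice X and y would come from the measured samples
rng = np.random.default_rng(42)
X = rng.uniform(size=(200, len(FEATURES)))
y = X @ rng.uniform(size=len(FEATURES)) + 0.05 * rng.standard_normal(200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```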

  4. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBS's cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBS's, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBS's will be developed, and modification of the SSF to support these tools and methods will be addressed.

  5. New Design for Rapid Prototyping of Digital Master Casts for Multiple Dental Implant Restorations

    PubMed Central

    Romero, Luis; Jiménez, Mariano; Espinosa, María del Mar; Domínguez, Manuel

    2015-01-01

    Aim This study proposes the replacement of all the physical devices used in the manufacturing of conventional prostheses through the use of digital tools, such as 3D scanners, CAD design software, 3D implant files, rapid prototyping machines or reverse engineering software, in order to develop laboratory work models from which to finish coatings for dental prostheses. Different types of dental prosthetic structures are used, which were adjusted by a non-rotatory threaded fixing system. Method From a digital process, the relative positions of dental implants, soft tissue and adjacent teeth of edentulous or partially edentulous patients have been captured, and a master working model which accurately replicates data relating to the patient's oral cavity has been produced through the treatment of three-dimensional digital data. Results Compared with the conventional master cast, the results show a significant cost savings in attachments, as well as an increase in the quality of reproduction and accuracy of the master cast, with the consequent reduction in the number of patient consultation visits. The combination of software and hardware three-dimensional tools allows the optimization of the planning protocol for dental implant-supported rehabilitations, improving the predictability of clinical treatments and the production cost savings of master casts for restorations upon implants. PMID:26696528

  6. New Design for Rapid Prototyping of Digital Master Casts for Multiple Dental Implant Restorations.

    PubMed

    Romero, Luis; Jiménez, Mariano; Espinosa, María Del Mar; Domínguez, Manuel

    2015-01-01

    This study proposes the replacement of all the physical devices used in the manufacturing of conventional prostheses through the use of digital tools, such as 3D scanners, CAD design software, 3D implant files, rapid prototyping machines or reverse engineering software, in order to develop laboratory work models from which to finish coatings for dental prostheses. Different types of dental prosthetic structures are used, which were adjusted by a non-rotatory threaded fixing system. From a digital process, the relative positions of dental implants, soft tissue and adjacent teeth of edentulous or partially edentulous patients have been captured, and a master working model which accurately replicates data relating to the patient's oral cavity has been produced through the treatment of three-dimensional digital data. Compared with the conventional master cast, the results show a significant cost savings in attachments, as well as an increase in the quality of reproduction and accuracy of the master cast, with the consequent reduction in the number of patient consultation visits. The combination of software and hardware three-dimensional tools allows the optimization of the planning protocol for dental implant-supported rehabilitations, improving the predictability of clinical treatments and the production cost savings of master casts for restorations upon implants.

  7. Modular preoperative planning software for computer-aided oral implantology and the application of a novel stereolithographic template: a pilot study.

    PubMed

    Chen, Xiaojun; Yuan, Jianbing; Wang, Chengtao; Huang, Yuanliang; Kang, Lu

    2010-09-01

    In the field of oral implantology, there is a trend toward computer-aided implant surgery, especially the application of computerized tomography (CT)-derived surgical templates. However, because of a relatively unsatisfactory match between the templates and receptor sites, conventional surgical templates may not be accurate enough for the severely resorbed edentulous cases during the procedure of transferring the preoperative plan to the actual surgery. The purpose of this study is to introduce a novel bone-tooth-combined-supported surgical guide, which is designed by utilizing a special modular software and fabricated via a stereolithography technique using both laser scanning and CT imaging, thus improving the fit accuracy and reliability. A modular preoperative planning software was developed for computer-aided oral implantology. With the introduction of dynamic link libraries and some well-known free, open-source software libraries such as Visualization Toolkit (Kitware, Inc., New York, USA) and Insight Toolkit (Kitware, Inc.), a plug-in evolutive software architecture was established, allowing for expandability, accessibility, and maintainability in our system. To provide a link between the preoperative plan and the actual surgery, a novel bone-tooth-combined-supported surgical template was fabricated, utilizing laser scanning, image registration, and rapid prototyping. Clinical studies were conducted on four partially edentulous cases to make a comparison with the conventional bone-supported templates. The fixation was more stable than tooth-supported templates because laser scanning technology obtained detailed dentition information, which brought about the unique topography between the match surface of the templates and the adjacent teeth. The average distance deviations at the coronal and apical point of the implant were 0.66 mm (range: 0.3-1.2) and 0.86 mm (range: 0.4-1.2), and the average angle deviation was 1.84 degrees (range: 0.6-2.8 degrees). This pilot study proves that the novel combined-supported templates are superior to the conventional ones. However, more clinical cases will be needed to demonstrate their feasibility and reliability.

  8. Sensor control of robot arc welding

    NASA Technical Reports Server (NTRS)

    Sias, F. R., Jr.

    1985-01-01

    A basic problem in the application of robots for welding, namely how to guide a torch along a weld seam using sensory information, was studied. Improvement of the quality and consistency of certain Gas Tungsten Arc welds on the Space Shuttle Main Engine (SSME) that are too complex geometrically for conventional automation and therefore are done by hand was examined. The particular problems associated with space shuttle main engine (SSME) manufacturing and weld-seam tracking with an emphasis on computer vision methods were analyzed. Special interface software for the MINC computer was developed which will allow it to be used both as a test system to check out the robot interface software and later as a development tool for further investigation of sensory systems to be incorporated in welding procedures.

  9. HOMER® Energy Modeling Software V2.63

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Tom

    2003-12-31

    The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaic, hydropower, batteries, fuel cells, biomass and other inputs.

  10. HOMER® Energy Modeling Software V2.64

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Tom

    2003-12-31

    The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaic, hydropower, batteries, fuel cells, biomass and other inputs.

  11. HOMER® Energy Modeling Software V2.65

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Tom

    2008-12-31

    The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaic, hydropower, batteries, fuel cells, biomass and other inputs.

  12. HOMER® Energy Modeling Software V2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Tom

    2003-12-31

    The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaic, hydropower, batteries, fuel cells, biomass and other inputs.

  13. HOMER® Energy Modeling Software V2.19

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Tom

    2008-12-31

    The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaic, hydropower, batteries, fuel cells, biomass and other inputs.

  14. HOMER® Energy Modeling Software V2.67

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Tom

    2008-12-31

    The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaic, hydropower, batteries, fuel cells, biomass and other inputs.

  15. Software Reviews. PC Software for Artificial Intelligence Applications.

    ERIC Educational Resources Information Center

    Epp, Helmut; And Others

    1988-01-01

    Contrasts artificial intelligence and conventional programming languages. Reviews Personal Consultant Plus, Smalltalk/V, and Nexpert Object, which are PC-based products inspired by problem-solving paradigms. Provides information on background and operation of each. (RT)

  16. More flexibility in representing geometric distortion in astronomical images

    NASA Astrophysics Data System (ADS)

    Shupe, David L.; Laher, Russ R.; Storrie-Lombardi, Lisa; Surace, Jason; Grillmair, Carl; Levitan, David; Sesar, Branimir

    2012-09-01

    A number of popular software tools in the public domain are used by astronomers, professional and amateur alike, but some of the tools that have similar purposes cannot be easily interchanged, owing to the lack of a common standard. For the case of image distortion, SCAMP and SExtractor, available from Astromatic.net, perform astrometric calibration and source-object extraction on image data, and image-data geometric distortion is computed in celestial coordinates with polynomial coefficients stored in the FITS header with the PVi_j keywords. Another widely-used astrometric-calibration service, Astrometry.net, solves for distortion in pixel coordinates using the SIP convention that was introduced by the Spitzer Science Center. Up until now, due to the complexity of these distortion representations, it was very difficult to use the output of one of these packages as input to the other. New Python software, along with faster-computing C-language translations, has been developed at the Infrared Processing and Analysis Center (IPAC) to convert FITS-image headers from PV to SIP and vice versa. It is now possible to straightforwardly use Astrometry.net for astrometric calibration and then SExtractor for source-object extraction. The new software also enables astrometric calibration by SCAMP followed by image visualization with tools that support SIP distortion, but not PV. The software has been incorporated into the image-processing pipelines of the Palomar Transient Factory (PTF), which generate FITS images with headers containing both distortion representations. The software permits the conversion of archived images, such as from the Spitzer Heritage Archive and NASA/IPAC Infrared Science Archive, from SIP to PV or vice versa. This new capability renders unnecessary any new representation, such as the proposed TPV distortion convention.
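
    As a hedged illustration of the bookkeeping involved, the sketch below (assuming the astropy library and the standard SIP and PV keyword names) merely reports which distortion convention a FITS header declares; the actual IPAC PV-to-SIP converter performs the full coefficient transformation and is separate software.

      # Sketch: report whether a FITS header carries SIP or PV distortion terms.
      from astropy.io import fits

      def distortion_conventions(header):
          conventions = set()
          ctypes = [str(header.get(key, "")) for key in ("CTYPE1", "CTYPE2")]
          if any(ctype.endswith("-SIP") for ctype in ctypes) or "A_ORDER" in header:
              conventions.add("SIP")
          if any(key.startswith(("PV1_", "PV2_")) for key in header):
              conventions.add("PV")
          return conventions or {"none"}

      # Hypothetical usage:
      # header = fits.getheader("ptf_image.fits")
      # print(distortion_conventions(header))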

  17. High-throughput state-machine replication using software transactional memory.

    PubMed

    Zhao, Wenbing; Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin

    2016-11-01

    State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit of using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replication with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance in all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution with excellent performance in low contention workload, and fairly good performance in high contention workload.
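
    The conflict relation that all three concurrency control mechanisms exploit can be sketched briefly. The example below (request identifiers and read/write sets are invented, and it is not the article's STM machinery) batches totally ordered requests so that only mutually non-conflicting requests share a batch, which preserves the delivery order between any two conflicting requests.

      # Sketch: two requests conflict if one writes an item the other reads or writes.
      requests = [
          {"id": 1, "reads": {"x"}, "writes": {"y"}},
          {"id": 2, "reads": {"z"}, "writes": {"z"}},  # independent of request 1
          {"id": 3, "reads": {"y"}, "writes": {"x"}},  # conflicts with request 1
      ]

      def conflicts(a, b):
          return bool(a["writes"] & (b["reads"] | b["writes"]) or
                      b["writes"] & (a["reads"] | a["writes"]))

      def batch(requests):
          batches = []
          for req in requests:  # requests arrive in total order
              if batches and not any(conflicts(req, other) for other in batches[-1]):
                  batches[-1].append(req)   # may run concurrently with the current batch
              else:
                  batches.append([req])     # conflicts: must wait for all earlier batches
          return batches

      for i, group in enumerate(batch(requests)):
          print("batch", i, [r["id"] for r in group])  # batch 0 [1, 2], batch 1 [3]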

  18. High-throughput state-machine replication using software transactional memory

    PubMed Central

    Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin

    2017-01-01

    State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit of using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replication with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance in all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution with excellent performance in low contention workload, and fairly good performance in high contention workload. PMID:29075049

  19. Open source tools for ATR development and performance evaluation

    NASA Astrophysics Data System (ADS)

    Baumann, James M.; Dilsavor, Ronald L.; Stubbles, James; Mossing, John C.

    2002-07-01

    Early in almost every engineering project, a decision must be made about tools; should I buy off-the-shelf tools or should I develop my own? Either choice can involve significant cost and risk. Off-the-shelf tools may be readily available, but they can be expensive to purchase and their licenses can be costly to maintain, and they may not be flexible enough to satisfy all project requirements. On the other hand, developing new tools permits great flexibility, but it can be time- (and budget-) consuming, and the end product still may not work as intended. Open source software has the advantages of both approaches without many of the pitfalls. This paper examines the concept of open source software, including its history, unique culture, and informal yet closely followed conventions. These characteristics influence the quality and quantity of software available, and ultimately its suitability for serious ATR development work. We give an example where Python, an open source scripting language, and OpenEV, a viewing and analysis tool for geospatial data, have been incorporated into ATR performance evaluation projects. While this case highlights the successful use of open source tools, we also offer important insight into risks associated with this approach.

  20. Web accessibility and open source software.

    PubMed

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse projects called Accessibility Tools Framework (ACTF), the aim of which is development of extensible infrastructure, upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  1. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  2. InterProScan 5: genome-scale protein function classification

    PubMed Central

    Jones, Philip; Binns, David; Chang, Hsin-Yu; Fraser, Matthew; Li, Weizhong; McAnulla, Craig; McWilliam, Hamish; Maslen, John; Mitchell, Alex; Nuka, Gift; Pesseat, Sebastien; Quinn, Antony F.; Sangrador-Vegas, Amaia; Scheremetjew, Maxim; Yong, Siew-Yit; Lopez, Rodrigo; Hunter, Sarah

    2014-01-01

    Motivation: Robust large-scale sequence analysis is a major challenge in modern genomic science, where biologists are frequently trying to characterize many millions of sequences. Here, we describe a new Java-based architecture for the widely used protein function prediction software package InterProScan. Developments include improvements and additions to the outputs of the software and the complete reimplementation of the software framework, resulting in a flexible and stable system that is able to use both multiprocessor machines and conventional clusters to achieve scalable distributed data analysis. InterProScan is freely available for download from the EMBL-EBI FTP site and the open source code is hosted at Google Code. Availability and implementation: InterProScan is distributed via FTP at ftp://ftp.ebi.ac.uk/pub/software/unix/iprscan/5/ and the source code is available from http://code.google.com/p/interproscan/. Contact: http://www.ebi.ac.uk/support or interhelp@ebi.ac.uk or mitchell@ebi.ac.uk PMID:24451626

  3. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.

    Background Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions The production of CML compliant XML files for the computational chemistry software NWChem can be relatively easily accomplished using the FoX library. A unified computational chemistry or CompChem convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.
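
    For readers unfamiliar with CML, the rough sketch below shows what a minimal CML-style molecule record looks like, written with Python's standard library rather than the FoX Fortran library; the namespace and attribute names follow the CML schema as commonly documented, the water geometry is an arbitrary example, and the real NWChem output follows the much richer CompChem convention discussed above.

      # Sketch: emit a minimal CML-style molecule element with xml.etree.
      import xml.etree.ElementTree as ET

      CML_NS = "http://www.xml-cml.org/schema"  # assumed CML schema namespace
      ET.register_namespace("cml", CML_NS)

      molecule = ET.Element(f"{{{CML_NS}}}molecule", id="water")
      atom_array = ET.SubElement(molecule, f"{{{CML_NS}}}atomArray")
      atoms = [("a1", "O", 0.000, 0.000, 0.117),
               ("a2", "H", 0.000, 0.757, -0.467),
               ("a3", "H", 0.000, -0.757, -0.467)]
      for atom_id, element, x, y, z in atoms:
          ET.SubElement(atom_array, f"{{{CML_NS}}}atom", id=atom_id,
                        elementType=element, x3=str(x), y3=str(y), z3=str(z))

      print(ET.tostring(molecule, encoding="unicode"))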

  4. A user-friendly LabVIEW software platform for grating based X-ray phase-contrast imaging.

    PubMed

    Wang, Shenghao; Han, Huajie; Gao, Kun; Wang, Zhili; Zhang, Can; Yang, Meng; Wu, Zhao; Wu, Ziyu

    2015-01-01

    X-ray phase-contrast imaging can provide greatly improved contrast over conventional absorption-based imaging for weakly absorbing samples, such as biological soft tissues and fibre composites. In this study, we introduced an easy and fast way to develop a user-friendly software platform dedicated to the new grating-based X-ray phase-contrast imaging setup at the National Synchrotron Radiation Laboratory of the University of Science and Technology of China. The control of 21 motorized stages, of a piezoelectric stage and of an X-ray tube is achieved with this software; it also covers image acquisition with a flat panel detector for automatic phase-stepping scans. Moreover, a data post-processing module for signal retrieval and other custom features are in principle available. With a seamless integration of all the necessary functions in one software package, this platform greatly facilitates users' activities during experimental runs with this grating-based X-ray phase-contrast imaging setup.
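
    The signal-retrieval step mentioned above is usually a Fourier analysis of the phase-stepping curve recorded at each pixel. The sketch below is a generic NumPy version of that retrieval (sample and reference stacks of shape (steps, rows, cols) are assumed, and the synthetic data are invented); it is not the platform's actual LabVIEW code.

      # Sketch: Fourier-component retrieval for grating-based phase stepping.
      import numpy as np

      def retrieve(sample, reference):
          fs = np.fft.fft(sample, axis=0)      # Fourier components along the stepping axis
          fr = np.fft.fft(reference, axis=0)
          a0_s, a1_s = np.abs(fs[0]), np.abs(fs[1])
          a0_r, a1_r = np.abs(fr[0]), np.abs(fr[1])
          transmission = a0_s / a0_r                      # absorption image
          dphi = np.angle(fs[1]) - np.angle(fr[1])        # differential phase image
          dphi = np.angle(np.exp(1j * dphi))              # wrap to (-pi, pi]
          darkfield = (a1_s / a0_s) / (a1_r / a0_r)       # visibility reduction (dark field)
          return transmission, dphi, darkfield

      # Synthetic flat-field example: 8 phase steps on a 64 x 64 detector.
      steps = np.arange(8)
      stepping_curve = 1000 + 200 * np.cos(2 * np.pi * steps / 8)
      reference = np.tile(stepping_curve[:, None, None], (1, 64, 64))
      sample = 0.8 * reference                            # purely absorbing "sample"
      t, p, d = retrieve(sample, reference)               # t ~ 0.8, p ~ 0, d ~ 1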

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaf, S.; APS Engineering Support Division

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  6. Programming model for distributed intelligent systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  7. Chips: A Tool for Developing Software Interfaces Interactively.

    DTIC Science & Technology

    1987-10-01

    of the application through the objects on the screen. Chips makes this easy by supplying simple and direct access to the source code and data ...object-oriented programming, user interface management systems, programming environments. Typographic Conventions Technical terms appearing in the...creating an environment in which we could do our work. This project could not have happened without him. Jeff Bonar started and managed the Chips

  8. Space station advanced automation

    NASA Technical Reports Server (NTRS)

    Woods, Donald

    1990-01-01

    In the development of a safe, productive and maintainable space station, Automation and Robotics (A and R) has been identified as an enabling technology which will allow efficient operation at a reasonable cost. The Space Station Freedom's (SSF) systems are very complex and interdependent. The usage of Advanced Automation (AA) will help restructure and integrate system status so that station and ground personnel can operate more efficiently. To use AA technology for the augmentation of system management functions requires a development model which consists of well defined phases of: evaluation, development, integration, and maintenance. The evaluation phase will consider system management functions against traditional solutions, implementation techniques and requirements; the end result of this phase should be a well developed concept along with a feasibility analysis. In the development phase the AA system will be developed in accordance with a traditional Life Cycle Model (LCM) modified for Knowledge Based System (KBS) applications. A way by which both knowledge bases and reasoning techniques can be reused to control costs is explained. During the integration phase the KBS software must be integrated with conventional software, and verified and validated. The Verification and Validation (V and V) techniques applicable to these KBS are based on the ideas of consistency, minimal competency, and graph theory. The maintenance phase will be aided by having well designed and documented KBS software.

  9. OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4

    PubMed Central

    2012-01-01

    Background Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This fact is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. Objective We provide a plugin for the Protégé Ontology editor to allow for easy checks on compliance with ontology naming conventions and metadata completeness, as well as curation in case of found violations. Implementation In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigate the needed capabilities for software tools to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies. Based on these test results, the plugin could be refined, also by the integration of new functionalities. Results The new Protégé plugin, OntoCheck, allows for ontology tests to be carried out on OWL ontologies. In particular the OntoCheck plugin helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Found test violations can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints like name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts and briefly outline improvements resulting from its application. Further, we discuss OntoCheck's capabilities in the context of related tools and highlight potential future expansions. Conclusions The OntoCheck plugin facilitates labelling error detection and curation, contributing to lexical quality assurance in OWL ontologies. Ultimately, we hope this Protégé extension will ease ontology alignments as well as lexical post-processing of annotated data and hence can increase overall secondary data usage by humans and computers. PMID:23046606
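
    OntoCheck itself is a Java plugin for Protégé, but the kind of label test it automates can be sketched in a few lines of Python with the rdflib library; the "lowercase words separated by single spaces" pattern below is only an example convention, and the ontology file name is hypothetical.

      # Sketch: flag rdfs:label values violating a simple naming convention,
      # plus OWL classes that lack a label altogether.
      import re
      from rdflib import Graph, RDF, RDFS, OWL

      PATTERN = re.compile(r"^[a-z0-9]+( [a-z0-9]+)*$")  # example convention only

      g = Graph()
      g.parse("ontology.owl", format="xml")  # hypothetical RDF/XML ontology file

      labelled = set()
      for cls, label in g.subject_objects(RDFS.label):
          labelled.add(cls)
          if not PATTERN.match(str(label)):
              print(f"Label violates convention: {cls} -> '{label}'")

      for cls in g.subjects(RDF.type, OWL.Class):
          if cls not in labelled:
              print(f"Missing rdfs:label: {cls}")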

  10. A Systematic Process for Developing High Quality SaaS Cloud Services

    NASA Astrophysics Data System (ADS)

    La, Hyun Jung; Kim, Soo Dong

    Software-as-a-Service (SaaS) is a type of cloud service which provides software functionality through Internet. Its benefits are well received in academia and industry. To fully utilize the benefits, there should be effective methodologies to support the development of SaaS services which provide high reusability and applicability. Conventional approaches such as object-oriented methods do not effectively support SaaS-specific engineering activities such as modeling common features, variability, and designing quality services. In this paper, we present a systematic process for developing high quality SaaS and highlight the essentiality of commonality and variability (C&V) modeling to maximize the reusability. We first define criteria for designing the process model and provide a theoretical foundation for SaaS; its meta-model and C&V model. We clarify the notion of commonality and variability in SaaS, and propose a SaaS development process which is accompanied with engineering instructions. Using the proposed process, SaaS services with high quality can be effectively developed.

  11. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface are reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited with a small set of commands. The program's complexity then can be increased incrementally. The intelligent interface includes a model of the operator's behavior, and three types of instructions to the underlying application software are included in the rule base. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
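
    The forward-chaining cycle described above (match antecedents against the data base, fire rules, forward consequents as commands) can be sketched in a few lines; the facts, rules and command names below are invented for illustration and do not come from the actual interface prototype.

      # Sketch: forward chaining over a tiny rule base; consequents of fired rules
      # become commands for the underlying application (here, a text processor).
      facts = {"operator_selected_text", "operator_pressed_bold"}
      rules = [
          ({"operator_selected_text", "operator_pressed_bold"}, "cmd:apply_bold"),
          ({"cmd:apply_bold"}, "cmd:refresh_display"),
      ]

      commands = []
      changed = True
      while changed:
          changed = False
          for antecedents, consequent in rules:
              if antecedents <= facts and consequent not in facts:
                  facts.add(consequent)        # assert the consequent as a new fact
                  commands.append(consequent)  # and forward it as a command
                  changed = True

      print(commands)  # ['cmd:apply_bold', 'cmd:refresh_display']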

  12. Quantitative fluorescence angiography for neurosurgical interventions.

    PubMed

    Weichelt, Claudia; Duscha, Philipp; Steinmeier, Ralf; Meyer, Tobias; Kuß, Julia; Cimalla, Peter; Kirsch, Matthias; Sobottka, Stephan B; Koch, Edmund; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    Present methods for quantitative measurement of cerebral perfusion during neurosurgical operations require additional technology for measurement, data acquisition, and processing. This study used conventional fluorescence video angiography--as an established method to visualize blood flow in brain vessels--enhanced by a quantifying perfusion software tool. For these purposes, the fluorescence dye indocyanine green is given intravenously, and after activation by a near-infrared light source the fluorescence signal is recorded. Video data are analyzed by software algorithms to allow quantification of the blood flow. Additionally, perfusion is measured intraoperatively by a reference system. Furthermore, comparative reference measurements using a flow phantom were performed to verify the quantitative blood flow results of the software and to validate the software algorithm. Analysis of intraoperative video data provides characteristic biological parameters. These parameters were implemented in the special flow phantom for experimental validation of the developed software algorithms. Furthermore, various factors that influence the determination of perfusion parameters were analyzed by means of mathematical simulation. Comparing patient measurement, phantom experiment, and computer simulation under certain conditions (variable frame rate, vessel diameter, etc.), the results of the software algorithms are within the range of parameter accuracy of the reference methods. Therefore, the software algorithm for calculating cortical perfusion parameters from video data presents a helpful intraoperative tool without complex additional measurement technology.

  13. Usefulness of the automatic quantitative estimation tool for cerebral blood flow: clinical assessment of the application software tool AQCEL.

    PubMed

    Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi

    2011-01-01

    AQCEL enables automatic reconstruction of single-photon emission computed tomogram (SPECT) without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using the conventional filtered back projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians interpreted the image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemisphere of the 12 patients using ACZ calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25 ± 0.45 and that by the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). During quantitative estimation using ACZ, the mean CBF values of 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995. While comparing the 24 3DSRT regions of 12 patients, the squares of the correlation coefficient between AQCEL and conventional methods were 0.973 and 0.986 for the normal and affected sides at rest, respectively, and 0.977 and 0.984 for the normal and affected sides after ACZ loading, respectively. The quality of images reconstructed using the application software AQCEL was superior to that obtained using the conventional method after ACZ loading, and high quantitative correlations were shown at rest and after ACZ loading. This software can be applied to clinical practice and is a useful tool for improvement of reproducibility and throughput.

  14. A VxD-based automatic blending system using multithreaded programming.

    PubMed

    Wang, L; Jiang, X; Chen, Y; Tan, K C

    2004-01-01

    This paper discusses the object-oriented software design for an automatic blending system. By combining the advantages of a programmable logic controller (PLC) and an industrial control PC (ICPC), an automatic blending control system is developed for a chemical plant. The system structure and multithread-based communication approach are first presented in this paper. The overall software design issues, such as system requirements and functionalities, are then discussed in detail. Furthermore, by replacing the conventional dynamic link library (DLL) with virtual X device drivers (VxD's), a practical and cost-effective solution is provided to improve the robustness of the Windows platform-based automatic blending system in small- and medium-sized plants.

  15. Spacecraft command verification: The AI solution

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Smith, Brian K.

    1990-01-01

    Recently, a knowledge-based approach was used to develop a system called the Command Constraint Checker (CCC) for TRW. CCC was created to automate the process of verifying spacecraft command sequences. To check command files by hand for timing and sequencing errors is a time-consuming and error-prone task. Conventional software solutions were rejected when it was estimated that it would require 36 man-months to build an automated tool to check constraints by conventional methods. Using rule-based representation to model the various timing and sequencing constraints of the spacecraft, CCC was developed and tested in only three months. By applying artificial intelligence techniques, CCC designers were able to demonstrate the viability of AI as a tool to transform difficult problems into easily managed tasks. The design considerations used in developing CCC are discussed and the potential impact of this system on future satellite programs is examined.
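
    CCC itself encoded the spacecraft's timing and sequencing constraints as rules; the sketch below only conveys the flavor of such checks with a procedural Python version, and the command names, times and limits are invented.

      # Sketch: check a command sequence against two invented constraints:
      # (1) successive HEATER_ON commands must be at least 5 s apart, and
      # (2) CAMERA_EXPOSE must be preceded by CAMERA_POWER_ON.
      sequence = [  # (time in seconds, command)
          (0.0, "CAMERA_POWER_ON"),
          (2.0, "HEATER_ON"),
          (3.0, "HEATER_ON"),      # violates the separation rule
          (10.0, "CAMERA_EXPOSE"),
      ]

      MIN_HEATER_SEPARATION = 5.0
      last_heater = None
      camera_powered = False

      for t, cmd in sequence:
          if cmd == "HEATER_ON":
              if last_heater is not None and t - last_heater < MIN_HEATER_SEPARATION:
                  print(f"t={t}: HEATER_ON only {t - last_heater:.1f} s after the previous one")
              last_heater = t
          elif cmd == "CAMERA_POWER_ON":
              camera_powered = True
          elif cmd == "CAMERA_EXPOSE" and not camera_powered:
              print(f"t={t}: CAMERA_EXPOSE issued before CAMERA_POWER_ON")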

  16. Discharge measurements using a broad-band acoustic Doppler current profiler

    USGS Publications Warehouse

    Simpson, Michael R.

    2002-01-01

    The measurement of unsteady or tidally affected flow has been a problem faced by hydrologists for many years. Dynamic discharge conditions impose an unreasonably short time constraint on conventional current-meter discharge-measurement methods, which typically last a minimum of 1 hour. Tidally affected discharge can change more than 100 percent during a 10-minute period. Over the years, the U.S. Geological Survey (USGS) has developed moving-boat discharge-measurement techniques that are much faster but less accurate than conventional methods. For a bibliography of conventional moving-boat publications, see Simpson and Oltmann (1993, page 17). The advent of the acoustic Doppler current profiler (ADCP) made possible the development of a discharge-measurement system capable of more accurately measuring unsteady or tidally affected flow. In most cases, an ADCP discharge-measurement system is dramatically faster than conventional discharge-measurement systems, and has comparable or better accuracy. In many cases, an ADCP discharge-measurement system is the only choice for use at a particular measurement site. ADCP systems are not yet 'turnkey'; they are still under development, and for proper operation, require a significant amount of operator training. Not only must the operator have a rudimentary knowledge of acoustic physics, but also a working knowledge of ADCP operation, the manufacturer's discharge-measurement software, and boating techniques and safety.
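
    The core of the moving-boat computation is a cross product of water and boat velocities integrated over the crossing. The sketch below is a heavily simplified, depth-averaged version with invented numbers; production ADCP software integrates over individual depth cells and estimates the unmeasured top, bottom and edge zones, which this sketch ignores.

      # Sketch: simplified moving-boat discharge from depth-averaged velocities.
      # Per ensemble: dQ = (u_w * vy_b - v_w * vx_b) * depth * dt, the cross product
      # of water and boat velocity times depth and ensemble duration (units: m^3/s).
      ensembles = [  # (u_water, v_water, vx_boat, vy_boat, depth_m, dt_s) -- invented
          (0.8, 0.1, 0.0, 1.2, 4.5, 1.0),
          (0.9, 0.0, 0.1, 1.1, 5.0, 1.0),
          (0.7, 0.2, 0.0, 1.3, 4.8, 1.0),
      ]

      discharge = 0.0
      for u_w, v_w, vx_b, vy_b, depth, dt in ensembles:
          discharge += (u_w * vy_b - v_w * vx_b) * depth * dt

      print(f"discharge through the measured portion: {discharge:.2f} m^3/s")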

  17. Colonoscopy tutorial software made with a cadaver's sectioned images.

    PubMed

    Chung, Beom Sun; Chung, Min Suk; Park, Hyung Seon; Shin, Byeong-Seok; Kwon, Koojoo

    2016-11-01

    Novice doctors may watch tutorial videos in training for actual or computed tomographic (CT) colonoscopy. The conventional learning videos can be complemented by virtual colonoscopy software made with a cadaver's sectioned images (SIs). The objective of this study was to assist colonoscopy trainees with the new interactive software. Submucosal segmentation on the SIs was carried out through the whole length of the large intestine. With the SIs and segmented images, a three dimensional model was reconstructed. Six-hundred seventy-one proximal colonoscopic views (conventional views) and corresponding distal colonoscopic views (simulating the retroflexion of a colonoscope) were produced. Not only navigation views showing the current location of the colonoscope tip and its course, but also, supplementary description views were elaborated. The four corresponding views were put into convenient browsing software to be downloaded free from the homepage (anatomy.co.kr). The SI colonoscopy software with the realistic images and supportive tools was available to anybody. Users could readily notice the position and direction of the virtual colonoscope tip and recognize meaningful structures in colonoscopic views. The software is expected to be an auxiliary learning tool to improve technique and related knowledge in actual and CT colonoscopies. Hopefully, the software will be updated using raw images from the Visible Korean project. Copyright © 2016 Elsevier GmbH. All rights reserved.

  18. Closing the Certification Gaps in Adaptive Flight Control Software

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    2008-01-01

    Over the last five decades, extensive research has been performed to design and develop adaptive control systems for aerospace systems and other applications where the capability to change controller behavior at different operating conditions is highly desirable. Although adaptive flight control has been partially implemented through the use of gain-scheduled control, truly adaptive control systems using learning algorithms and on-line system identification methods have not seen commercial deployment. The reason is that the certification process for adaptive flight control software for use in national air space has not yet been decided. The purpose of this paper is to examine the gaps between the state-of-the-art methodologies used to certify conventional (i.e., non-adaptive) flight control system software and what will likely be needed to satisfy FAA airworthiness requirements. These gaps include the lack of a certification plan or process guide, the need to develop verification and validation tools and methodologies to analyze adaptive controller stability and convergence, as well as the development of metrics to evaluate adaptive controller performance at off-nominal flight conditions. This paper presents the major certification gap areas, a description of the current state of the verification methodologies, and what further research efforts will likely be needed to close the gaps remaining in current certification practices. It is envisioned that closing the gap will require certain advances in simulation methods, comprehensive methods to determine learning algorithm stability and convergence rates, the development of performance metrics for adaptive controllers, the application of formal software assurance methods, the application of on-line software monitoring tools for adaptive controller health assessment, and the development of a certification case for adaptive system safety of flight.

  19. Automated Predictive Diagnosis (APD): A 3-tiered shell for building expert systems for automated predictions and decision making

    NASA Technical Reports Server (NTRS)

    Steib, Michael

    1991-01-01

    The APD software features include: On-line help, Three level architecture (Logic environments, Setup/Application environment, Data environment), Explanation capability, and File handling. The kinds of experimentation and record keeping that lead to effective expert systems are facilitated by: (1) a library of inferencing modules (in the logic environment); (2) an explanation capability which reveals logic strategies to users; (3) automated file naming conventions; (4) an information retrieval system; and (5) on-line help. These aid with effective use of knowledge, debugging and experimentation. Since the APD software anticipates the logical rules becoming complicated, it is embedded in a production system language (CLIPS) to ensure the full power of the production system paradigm of CLIPS and the availability of the procedural language C. The development of the APD software is discussed, along with three example applications: a toy, an experimental, and an operational prototype for submarine maintenance predictions.

  20. Storage system software solutions for high-end user needs

    NASA Technical Reports Server (NTRS)

    Hogan, Carole B.

    1992-01-01

    Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.

  1. The theory of interface slicing

    NASA Technical Reports Server (NTRS)

    Beck, Jon

    1993-01-01

    Interface slicing is a new tool which was developed to facilitate reuse-based software engineering, by addressing the following problems, needs, and issues: (1) size of systems incorporating reused modules; (2) knowledge requirements for program modification; (3) program understanding for reverse engineering; (4) module granularity and domain management; and (5) time and space complexity of conventional slicing. The definition of a form of static program analysis called interface slicing is addressed.
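
    A toy rendering of the idea (not the paper's formal definition) is to keep only the definitions reachable from the exported functions a client actually needs, which is what shrinks systems that incorporate reused modules; the module call graph and function names below are invented.

      # Sketch: slice a module down to the definitions reachable from the needed exports.
      call_graph = {  # function -> functions it calls within the module
          "parse": {"tokenize", "build_tree"},
          "tokenize": set(),
          "build_tree": {"make_node"},
          "make_node": set(),
          "pretty_print": {"indent"},
          "indent": set(),
      }

      def interface_slice(call_graph, needed):
          """Transitive closure of the call graph starting from the needed exports."""
          keep, stack = set(), list(needed)
          while stack:
              fn = stack.pop()
              if fn not in keep:
                  keep.add(fn)
                  stack.extend(call_graph.get(fn, ()))
          return keep

      # A client that only needs 'parse' keeps 4 of the 6 definitions:
      print(sorted(interface_slice(call_graph, {"parse"})))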

  2. Interactive Structure (EUCLID) For Static And Dynamic Representation Of Human Body

    NASA Astrophysics Data System (ADS)

    Renaud, Ch.; Steck, R.

    1983-07-01

    A specific software package (EUCLID) for static and dynamic representation of human models is described. The data processing system is connected with ERGODATA and used in interactive mode by intrinsic or specific functions. More or less complex 3-D representations of models of the human body are developed. Biostereometric and conventional anthropometric raw data from the data bank are processed for different applications in ergonomics.

  3. Software-assisted post-interventional assessment of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Rieder, Christian; Geisler, Benjamin; Bruners, Philipp; Isfort, Peter; Na, Hong-Sik; Mahnken, Andreas H.; Hahn, Horst K.

    2014-03-01

    Radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. Due to its common technical procedure, low complication rate, and low cost, RFA has become an alternative to surgical resection in the liver. To evaluate the therapy success of RFA, thorough follow-up imaging is essential. Conventionally, shape, size, and position of tumor and coagulation are visually compared in a side-by-side manner using pre- and post-interventional images. To objectify the verification of the treatment success, a novel software assistant allowing for fast and accurate comparison of tumor and coagulation is proposed. In this work, the clinical value of the proposed assessment software is evaluated. In a retrospective clinical study, 39 cases of hepatic tumor ablation are evaluated using the prototype software and conventional image comparison by four radiologists with different levels of experience. The cases are randomized and evaluated in two sessions to avoid any recall-bias. Self-confidence of correct diagnosis (local recurrence vs. no local recurrence) on a six-point scale is given for each case by the radiologists. Sensitivity, specificity, positive and negative predictive values as well as receiver operating curves are calculated for both methods. It is shown that the software-assisted method allows physicians to correctly identify local tumor recurrence with a higher percentage than the conventional method (sensitivity: 0.6 vs. 0.35), whereas the percentage of correctly identified successful ablations is slightly reduced (specificity: 0.83 vs. 0.89).

  4. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    NASA Technical Reports Server (NTRS)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of conventional and ANN classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.
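
    The comparison can be mimicked on synthetic data with scikit-learn, where quadratic discriminant analysis stands in for the conventional Gaussian maximum likelihood classifier and a small back-propagation network for the ANN; this is only an illustrative sketch with invented data, not the paper's satellite imagery or original implementation.

      # Sketch: maximum likelihood (QDA) versus a small ANN with a minimal training set.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=2000, n_features=4, n_informative=4,
                                 n_redundant=0, n_classes=3, n_clusters_per_class=1,
                                 random_state=0)
      # "Minimal training set": the first 10 samples of each class.
      train_idx = np.concatenate([np.where(y == c)[0][:10] for c in np.unique(y)])
      test_idx = np.setdiff1d(np.arange(len(X)), train_idx)

      ml = QuadraticDiscriminantAnalysis().fit(X[train_idx], y[train_idx])
      ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X[train_idx], y[train_idx])

      print("maximum likelihood accuracy:", ml.score(X[test_idx], y[test_idx]))
      print("neural network accuracy:   ", ann.score(X[test_idx], y[test_idx]))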

  5. Building an infrastructure at PICKSC for the educational use of kinetic software tools

    NASA Astrophysics Data System (ADS)

    Mori, W. B.; Decyk, V. K.; Tableman, A.; Fonseca, R. A.; Tsung, F. S.; Hu, Q.; Winjum, B. J.; Amorim, L. D.; An, W.; Dalichaouch, T. N.; Davidson, A.; Joglekar, A.; Li, F.; May, J.; Touati, M.; Xu, X. L.; Yu, P.

    2016-10-01

    One aim of the Particle-In-Cell and Kinetic Simulation Center (PICKSC) at UCLA is to coordinate a community development of educational software for undergraduate and graduate courses in plasma physics and computer science. The rich array of physical behaviors exhibited by plasmas can be difficult to grasp by students. If they are given the ability to quickly and easily explore plasma physics through kinetic simulations, and to make illustrative visualizations of plasma waves, particle motion in electromagnetic fields, instabilities, or other phenomena, then they can be equipped with first-hand experiences that inform and contextualize conventional texts and lectures. We are developing an infrastructure for any interested persons to take our kinetic codes, run them without any prerequisite knowledge, and explore desired scenarios. Furthermore, we are actively interested in any ideas or input from other plasma physicists. This poster aims to illustrate what we have developed and gather a community of interested users and developers. Supported by NSF under Grant ACI-1339893.

  6. Fault-Tolerant Software-Defined Radio on Manycore

    NASA Technical Reports Server (NTRS)

    Ricketts, Scott

    2015-01-01

    Software-defined radio (SDR) platforms generally rely on field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), but such architectures require significant software development. In addition, application demands for radiation mitigation and fault tolerance exacerbate programming challenges. MaXentric Technologies, LLC, has developed a manycore-based SDR technology that provides 100 times the throughput of conventional radiation-hardened general purpose processors. Manycore systems (30-100 cores and beyond) have the potential to provide high processing performance at error rates that are equivalent to current space-deployed uniprocessor systems. MaXentric's innovation is a highly flexible radio, providing over-the-air reconfiguration; adaptability; and uninterrupted, real-time, multimode operation. The technology is also compliant with NASA's Space Telecommunications Radio System (STRS) architecture. In addition to its many uses within NASA communications, the SDR can also serve as a highly programmable research-stage prototyping device for new waveforms and other communications technologies. It can also support noncommunication codes on its multicore processor, collocated with the communications workload, reducing the size, weight, and power of the overall system by aggregating processing jobs to a single-board computer.

  7. Development of a Multi-Channel, High Frequency QRS Electrocardiograph

    NASA Technical Reports Server (NTRS)

    DePalma, Jude L.

    2003-01-01

    With the advent of the ISS era and the potential requirement for increased cardiovascular monitoring of crewmembers during extended EVAs, NASA flight surgeons would stand to benefit from an evolving technology that allows for a more rapid diagnosis of myocardial ischemia compared to standard electrocardiography. Similarly, during the astronaut selection process, NASA flight surgeons and other physicians would also stand to benefit from a completely noninvasive technology that, either at rest or during maximal exercise tests, is more sensitive than standard ECG in identifying the presence of ischemia. Perhaps most importantly, practicing cardiologists and emergency medicine physicians could greatly benefit from such a device as it could augment (or even replace) standard electrocardiography in settings where the rapid diagnosis of myocardial ischemia (or the lack thereof) is required for proper clinical decision-making. A multi-channel, high-frequency QRS electrocardiograph is currently under development in the Life Sciences Research Laboratories at JSC. Specifically the project consisted of writing software code, some of which contained specially-designed digital filters, which will be incorporated into an existing commercial software program that is already designed to collect, plot and analyze conventional 12-lead ECG signals on a desktop, portable or palm PC. The software will derive the high-frequency QRS signals, which will be analyzed (in numerous ways) and plotted alongside of the conventional ECG signals, giving the PC-viewing clinician advanced diagnostic information that has never been available previously in all 12 ECG leads simultaneously. After the hardware and software for the advanced digital ECG monitor have been fully integrated, plans are to use the monitor to begin clinical studies both on healthy subjects and on patients with known coronary artery disease in both the outpatient and hospital settings. The ultimate goal is to get the technology out into the clinical world, where it has the potential to save lives.
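
    The high-frequency QRS components are commonly taken from roughly the 150-250 Hz band; the sketch below (the band edges, sampling rate and placeholder signal are assumptions, and it is not the project's actual filter code) shows one way such a band could be isolated with SciPy.

      # Sketch: isolate high-frequency QRS content with a zero-phase Butterworth band-pass.
      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 1000.0                                  # assumed sampling rate (Hz)
      t = np.arange(0, 10, 1 / fs)
      ecg = np.sin(2 * np.pi * 1.2 * t)            # placeholder for a real ECG lead

      b, a = butter(4, [150.0, 250.0], btype="bandpass", fs=fs)
      hf_qrs = filtfilt(b, a, ecg)                 # high-frequency content of the lead

      # Root-mean-square amplitude of the HF components, one common summary measure:
      print(f"HF QRS RMS amplitude: {np.sqrt(np.mean(hf_qrs ** 2)):.6f}")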

  8. Extending the Capture Volume of an Iris Recognition System Using Wavefront Coding and Super-Resolution.

    PubMed

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen

    2016-12-01

    Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such improvement cannot be achieved by using a hardware- or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.

  9. A Matlab toolkit for three-dimensional electrical impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical Reconstruction Software project

    NASA Astrophysics Data System (ADS)

    Polydorides, Nick; Lionheart, William R. B.

    2002-12-01

    The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most of the commercially available finite element programs are unsuitable for solving these problems because of their conventional, inefficient way of calculating the Jacobian and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al. in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist this development we have developed and released a free toolkit of Matlab routines which can be employed to solve the forward and inverse EIT problems in three dimensions based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.
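
    For orientation, the sensitivity (Jacobian) entry relating a measured voltage to a local conductivity change is commonly written in the adjoint-field form shown below; the notation here is generic, and the reader should consult the paper for the authors' exact derivation under the complete electrode model.

        % Generic adjoint-field form of the EIT Jacobian (illustrative notation):
        % the change in the voltage V_m measured with pattern m, while current
        % pattern d is driven, due to a conductivity perturbation on element k is
        \[
          \frac{\partial V_{m}}{\partial \sigma_{k}}
            = - \int_{\Omega_{k}} \nabla u^{(d)} \cdot \nabla u^{(m)} \, \mathrm{d}\Omega ,
        \]
        % where u^{(d)} is the potential field of the drive pattern and u^{(m)} is the
        % adjoint potential obtained by driving the measurement electrodes.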

  10. SU-E-T-76: A Software System to Monitor VMAT Plan Complexity in a Large Radiotherapy Centre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugam, S; Xing, A; Ingham Institute, Sydney, NSW

    2015-06-15

    Purpose: To develop a system that analyses and reports the complexity of Volumetric Modulated Arc Therapy (VMAT) plans to aid in the decision making for streamlining patient specific dosimetric quality assurance (QA) tests. Methods: A software system, Delcheck, was developed in-house to calculate VMAT plan and delivery complexity using the treatment delivery file. Delcheck has the functionality to calculate multiple plan complexity metrics including the Li-Xing Modulation Index (LI-MI), multiplicative combination of Leaf Travel and Modulation Complexity Score (LTMCSv), Monitor Units per prescribed dose (MU/D) and the delivery complexity index (MIt) that incorporates the modulation of dose rate, leaf speed and gantry speed. Delcheck includes database functionality to store and compare plan metrics for a specified treatment site. The overall plan and delivery complexity is assessed based on the 95% conformance limit of the complexity metrics as Similar, More or Less complex. The functionality of the software was tested using 42 prostate conventional, 10 prostate SBRT and 15 prostate bed VMAT plans generated for an Elekta linear accelerator. Results: The mean (σ) of LI-MI for conventional, SBRT and prostate bed plans were 1690 (486), 3215.4 (1294) and 3258 (982) respectively. The LTMCSv of the studied categories were 0.334 (0.05), 0.325 (0.07) and 0.3112 (0.09). The MU/D of the studied categories were 2.4 (0.4), 2.7 (0.7) and 2.5 (0.5). The MIt of the studied categories were 21.6 (3.4), 18.2 (3.0) and 35.9 (6.6). The values of the complexity metrics show that LI-MI appeared to resolve the plan complexity better than LTMCSv and MU/D. The MIt value increased as the delivery complexity increased. Conclusion: The developed software was shown to be working as expected. Among the studied treatment categories, prostate bed plans were more complex in both plan and delivery, and SBRT plans were more complex in plan but less complex in delivery, as demonstrated by LI-MI and MIt. This project was funded through a Cancer Council NSW Project Grant (RG14-11).
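
    As a rough illustration of the simplest of these metrics and of a conformance-limit comparison, the Python sketch below computes MU per unit prescribed dose for a plan and flags it against a stored baseline for the same treatment site; the percentile-based 95% interval, the numbers, and the function names are illustrative assumptions, not Delcheck's implementation.

        # Illustrative sketch (not Delcheck): MU per prescribed dose and a simple
        # classification against the 95% range of previously stored plans for a site.
        import numpy as np

        def mu_per_dose(total_mu, prescribed_dose_cgy):
            """Monitor units per unit prescribed dose (MU/cGy)."""
            return total_mu / prescribed_dose_cgy

        def classify(value, baseline, coverage=0.95):
            """Label a plan Similar / More complex / Less complex versus a baseline."""
            lo = np.percentile(baseline, 100.0 * (1.0 - coverage) / 2.0)
            hi = np.percentile(baseline, 100.0 * (1.0 + coverage) / 2.0)
            if value > hi:
                return "More complex"
            if value < lo:
                return "Less complex"
            return "Similar"

        if __name__ == "__main__":
            baseline = np.random.normal(2.4, 0.4, size=42)   # e.g. stored prostate plans
            new_plan = mu_per_dose(total_mu=620.0, prescribed_dose_cgy=200.0)
            print("MU/D = %.2f -> %s" % (new_plan, classify(new_plan, baseline)))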

  11. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should provide a chance for users to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, are pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software other than a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Reprint of: Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-11-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should provide a chance for users to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, are pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software other than a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Small Business Innovations

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Under an Army Small Business Innovation Research (SBIR) grant, Symbiotics, Inc. developed a software system that permits users to upgrade products from standalone applications so they can communicate in a distributed computing environment. Under a subsequent NASA SBIR grant, Symbiotics added additional tools to the SOCIAL product to enable NASA to coordinate conventional systems for planning Shuttle launch support operations. Using SOCIAL, data may be shared among applications in a computer network even when the applications are written in different programming languages. The product was introduced to the commercial market in 1993 and is used to monitor and control equipment for operation support and to integrate financial networks. The SBIR program was established to increase small business participation in federal R&D activities and to transfer government research to industry. InQuisiX is a reuse library providing high performance classification, cataloging, searching, browsing, retrieval and synthesis capabilities. These form the foundation for software reuse, producing higher quality software at lower cost and in less time. Software Productivity Solutions, Inc. developed the technology under Small Business Innovation Research (SBIR) projects funded by NASA and the Army and is marketing InQuisiX in conjunction with Science Applications International Corporation (SAIC). The SBIR program was established to increase small business participation in federal R&D activities and to transfer government research to industry.

  14. Review: comparison of PET rubidium-82 with conventional SPECT myocardial perfusion imaging

    PubMed Central

    Ghotbi, Adam A; Kjær, Andreas; Hasbak, Philip

    2014-01-01

    Nuclear cardiology has for many years been focused on gamma camera technology. With ever improving cameras and software applications, this modality has developed into an important assessment tool for ischaemic heart disease. However, the development of new perfusion tracers has been scarce. While cardiac positron emission tomography (PET) so far largely has been limited to centres with on-site cyclotron, recent developments with generator produced perfusion tracers such as rubidium-82, as well as an increasing number of PET scanners installed, may enable a larger patient flow that may supersede that of gamma camera myocardial perfusion imaging. PMID:24028171

  15. Finite Element Modelling and Analysis of Conventional Pultrusion Processes

    NASA Astrophysics Data System (ADS)

    Akishin, P.; Barkanov, E.; Bondarchuk, A.

    2015-11-01

    Pultrusion is one of many composite manufacturing techniques and one of the most efficient methods for producing fiber reinforced polymer composite parts with a constant cross-section. Numerical simulation is helpful for understanding the manufacturing process and developing scientific means for the pultrusion tooling design. A numerical technique based on the finite element method has been developed for the simulation of pultrusion processes. It uses the general-purpose finite element software ANSYS Mechanical. It is shown that the developed technique predicts the temperature and cure profiles, which are in good agreement with those published in the open literature.

  16. Crosstalk: The Journal of Defense Software Engineering. Volume 22, Number 3

    DTIC Science & Technology

    2009-04-01

    international standard for information security management systems like ISO/IEC 27001:2005 [1] existed. Since that time, the organization has developed control...of ISO/IEC 27001 and the desire to make decisions based on business value and risk has prompted Ford's IT Security and Controls organization to begin...their conventional application security operation. References 1. ISO/IEC 27001:2005. "Information Technology – Security Techniques – Information

  17. Analysis of Acoustic Depth Sounder Signals with Artificial Neural Networks

    DTIC Science & Technology

    1991-04-01

    battery pack, processor, and mode switches and (2) a stainless steel shaft 1 meter long and 27 millimeters in diameter, containing 8 milliCurie of...returned signal which is not used in conventional depth sounders due to lack of real-time tools for interpreting the information. The shape and...develop some software tools for conducting the research. Commercial programs for neural network implementation were available, but were "black box" in

  18. Expecting the Unexpected: Radiation Hardened Software

    NASA Technical Reports Server (NTRS)

    Penix, John; Mehlitz, Peter C.

    2005-01-01

    Radiation induced Single Event Effects (SEEs) are a serious problem for spacecraft flight software, potentially leading to a complete loss of mission. Conventional risk mitigation has been focused on hardware, leading to slow, expensive and outdated on-board computing devices, increased power consumption and launch mass. Our approach is to look at SEEs from a software perspective, and to explicitly design flight software so that it can detect and correct the majority of SEEs. Radiation hardened flight software will reduce the significant residual risk for critical missions and flight phases, and enable more use of inexpensive and fast COTS hardware.
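
    The flight-software design itself is not described in detail here. One generic software-level countermeasure consistent with the detect-and-correct approach is to hold critical state redundantly and repair it by majority vote; the sketch below illustrates that idea only and is not the authors' implementation.

        # Generic illustration of a software SEE countermeasure: triple-redundant storage
        # of a critical value with majority voting and scrubbing (assumes at most one
        # copy is corrupted between reads). Not the NASA flight-software design.
        class TmrValue:
            def __init__(self, value):
                self._copies = [value, value, value]

            def write(self, value):
                self._copies = [value, value, value]

            def read(self):
                """Majority-vote the copies and rewrite (scrub) any corrupted copy."""
                a, b, c = self._copies
                voted = a if (a == b or a == c) else b   # if a disagrees with both, b == c
                self._copies = [voted, voted, voted]
                return voted

        if __name__ == "__main__":
            altitude = TmrValue(1200)
            altitude._copies[1] = 9999       # simulate a single-event upset in one copy
            print(altitude.read())           # -> 1200; the corrupted copy is repaired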

  19. Use of Field Programmable Gate Array Technology in Future Space Avionics

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.; Tate, Robert

    2005-01-01

    Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system. This is followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of included software. Standard bus design and conventional implementation produce natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provides the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited for NASA's vision. Hardware and software production are melded for more effective development; they can both evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software. Also, these designs can be protected from obsolescence problems where maintenance is compromised via component and vendor availability. To investigate the flexibility of this technology, the core of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.

  20. VLBI Analysis with the Multi-Technique Software GEOSAT

    NASA Technical Reports Server (NTRS)

    Kierulf, Halfdan Pascal; Andersen, Per-Helge; Boeckmann, Sarah; Kristiansen, Oddgeir

    2010-01-01

    GEOSAT is a multi-technique geodetic analysis software developed at Forsvarets Forsknings Institutt (Norwegian defense research establishment). The Norwegian Mapping Authority has now installed the software and has, together with Forsvarets Forsknings Institutt, adapted the software to deliver datum-free normal equation systems in SINEX format. The goal is to be accepted as an IVS Associate Analysis Center and to provide contributions to the IVS EOP combination on a routine basis. GEOSAT is based on an upper diagonal factorized Kalman filter which allows estimation of time variable parameters like the troposphere and clocks as stochastic parameters. The tropospheric delays in various directions are mapped to tropospheric zenith delay using ray-tracing. Meteorological data from ECMWF with a resolution of six hours is used to perform the ray-tracing which depends both on elevation and azimuth. Other models are following the IERS and IVS conventions. The Norwegian Mapping Authority has submitted test SINEX files produced with GEOSAT to IVS. The results have been compared with the existing IVS combined products. In this paper the outcome of these comparisons is presented.

  1. A microkernel design for component-based parallel numerical software systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balay, S.

    1999-01-13

    What is the minimal software infrastructure and what type of conventions are needed to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely--a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships), and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
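
    Two of the listed services, run-time loading and reference-counted object lifetime, can be sketched compactly; the Python sketch below uses importlib in place of native dynamic link libraries and is an illustration of the idea, not the ALICE microkernel.

        # Minimal sketch of two microkernel services: loading a component at run time and
        # reference-counted object lifetime. Python's importlib stands in for native
        # dynamic link libraries; this is not the ALICE implementation.
        import importlib

        class KernelObject:
            def __init__(self):
                self._refcount = 1

            def ref(self):
                self._refcount += 1

            def unref(self):
                self._refcount -= 1
                if self._refcount == 0:
                    self.destroy()

            def destroy(self):
                print("%s destroyed" % type(self).__name__)

        def load_component(module_name, class_name):
            """Load a component class from a module that is discovered at run time."""
            module = importlib.import_module(module_name)
            return getattr(module, class_name)

        if __name__ == "__main__":
            obj = KernelObject()
            obj.ref()
            obj.unref()
            obj.unref()      # reference count reaches zero -> destroy()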

  2. Process and information integration via hypermedia

    NASA Technical Reports Server (NTRS)

    Hammen, David G.; Labasse, Daniel L.; Myers, Robert M.

    1990-01-01

    Success stories for advanced automation prototypes abound in the literature but the deployments of practical large systems are few in number. There are several factors that militate against the maturation of such prototypes into products. Here, the integration of advanced automation software into large systems is discussed. Advanced automation systems tend to be specific applications that need to be integrated and aggregated into larger systems. Systems integration can be achieved by providing expert user-developers with verified tools to efficiently create small systems that interface to large systems through standard interfaces. The use of hypermedia as such a tool in the context of the ground control centers that support Shuttle and space station operations is explored. Hypermedia can be an integrating platform for data, conventional software, and advanced automation software, enabling data integration through the display of diverse types of information and through the creation of associative links between chunks of information. Further, hypermedia enables process integration through graphical invoking of system functions. Through analysis and examples, researchers illustrate how diverse information and processing paradigms can be integrated into a single software platform.

  3. The influence of software filtering in digital mammography image quality

    NASA Astrophysics Data System (ADS)

    Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.

    2009-05-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage in digital mammography is that images can be manipulated as simple computer image files. Thus non-dedicated commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS 2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving an image quality parameter may result in degradation of another. The aim of this work was to investigate the influence of three sharpening filters, named hereafter sharpen, sharpen more and sharpen edges, on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.
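
    The Photoshop filters are proprietary, but they behave broadly like the classic unsharp-mask operation; the sketch below applies an unsharp mask to an image array so the trade-off between edge enhancement and amplified noise can be explored. The Gaussian width and gain are illustrative assumptions, not the settings of the filters studied.

        # Illustrative unsharp-mask sharpening (a generic stand-in for proprietary filters):
        # output = input + gain * (input - blurred input). Larger gain raises edge contrast
        # but also amplifies noise, which is the trade-off examined in the study.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(image, sigma=1.5, gain=1.0):
            blurred = gaussian_filter(image.astype(float), sigma)
            return image + gain * (image - blurred)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            img = rng.normal(0.0, 5.0, (64, 64))       # flat field with noise
            img[:, 32:] += 100.0                        # a single vertical edge
            sharp = unsharp_mask(img)
            print("noise std before %.1f, after %.1f"
                  % (img[:, :20].std(), sharp[:, :20].std()))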

  4. [Interaction between clinical and research towards venture business].

    PubMed

    Sumida, Iori

    2014-01-01

    The author, as a medical physicist, has supported multiple institutions where advanced radiation therapies as well as conventional radiation therapy are performed. Since advanced radiation treatment techniques have spread rapidly, quality assurance (QA) has become more important and more complex, resulting in an increasing number of QA items. In order to maintain the quality of radiation therapy as accurately as possible, an efficient and objective approach to performing QA is essential. The author has developed QA software that addresses these needs, based on practical experience. In this paper, the radiation treatment situation at the supported institutions is presented as background, the author's contributions to those institutions through medical physics support are described, and finally it is considered how the developed software has spread in Japan and come to be used by many institutions through a venture business.

  5. Wearable and low-stress ambulatory blood pressure monitoring technology for hypertension diagnosis.

    PubMed

    Altintas, Ersin; Takoh, Kimiyasu; Ohno, Yuji; Abe, Katsumi; Akagawa, Takeshi; Ariyama, Tetsuri; Kubo, Masahiro; Tsuda, Kenichiro; Tochikubo, Osamu

    2015-01-01

    We propose a highly wearable, upper-arm, oscillometric blood pressure monitoring technology with low stress. The low stress is realized by new developments in the hardware and software design. In the hardware design, the conventional armband (cuff) is almost halved in volume thanks to a flexible plastic core and a liquid bag, which enhance the fit and pressure uniformity over the arm. The reduced air bag volume enables a smaller motor pump and battery, leading to a thinner, more compact and more wearable unified device. In the software design, a new prediction algorithm makes it possible to apply less stress (and less pain) to the patient's arm. Proof-of-concept experiments on volunteers show high accuracy for both technologies. This paper mainly introduces the hardware developments. The system is promising for less painful and less stressful 24-hour blood pressure monitoring in hypertension management and related healthcare solutions.

  6. Pressure distribution under flexible polishing tools. I - Conventional aspheric optics

    NASA Astrophysics Data System (ADS)

    Mehta, Pravin K.; Hufnagel, Robert E.

    1990-10-01

    The paper presents a mathematical model, based on Kirchhoff's thin flat plate theory, developed to determine polishing pressure distribution for a flexible polishing tool. A two-layered tool in which bending and compressive stiffnesses are equal is developed, which is formulated as a plate on a linearly elastic foundation. An equivalent eigenvalue problem and solution for a free-free plate are created from the plate formulation. For aspheric, anamorphic optical surfaces, the tool misfit is derived; it is defined as the result of movement from the initial perfect fit on the optic to any other position. The Polisher Design (POD) software for circular tools on aspheric optics is introduced. NASTRAN-based finite element analysis results are compared with the POD software, showing high correlation. By employing existing free-free eigenvalues and eigenfunctions, the work may be extended to rectangular polishing tools as well.

  7. Enhancing instruction scheduling with a block-structured ISA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melvin, S.; Patt, Y.

    It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.

  8. How Is Open Source Special?

    ERIC Educational Resources Information Center

    Kapor, Mitchell

    2005-01-01

    Open source software projects involve the production of goods, but in software projects, the "goods" consist of information. The open source model is an alternative to the conventional centralized, command-and-control way in which things are usually made. In contrast, open source projects are genuinely decentralized and transparent. Transparent…

  9. Randomized controlled within-subject evaluation of digital and conventional workflows for the fabrication of lithium disilicate single crowns. Part II: CAD-CAM versus conventional laboratory procedures.

    PubMed

    Sailer, Irena; Benic, Goran I; Fehmer, Vincent; Hämmerle, Christoph H F; Mühlemann, Sven

    2017-07-01

    Clinical studies are needed to evaluate the entire digital and conventional workflows in prosthetic dentistry. The purpose of the second part of this clinical study was to compare the laboratory production time for tooth-supported single crowns made with 4 different digital workflows and 1 conventional workflow and to compare these crowns clinically. For each of 10 participants, a monolithic crown was fabricated in lithium disilicate-reinforced glass ceramic (IPS e.max CAD). The computer-aided design and computer-aided manufacturing (CAD-CAM) systems were Lava C.O.S. CAD software and centralized CAM (group L), Cares CAD software and centralized CAM (group iT), Cerec Connect CAD software and lab side CAM (group CiL), and Cerec Connect CAD software with centralized CAM (group CiD). The conventional fabrication (group K) included a wax pattern of the crown and heat pressing according to the lost-wax technique (IPS e.max Press). The time for the fabrication of the casts and the crowns was recorded. Subsequently, the crowns were clinically evaluated and the corresponding treatment times were recorded. The Paired Wilcoxon test with the Bonferroni correction was applied to detect differences among treatment groups (α=.05). The total mean (±standard deviation) active working time for the dental technician was 88 ±6 minutes in group L, 74 ±12 minutes in group iT, 74 ±5 minutes in group CiL, 92 ±8 minutes in group CiD, and 148 ±11 minutes in group K. The dental technician spent significantly more working time for the conventional workflow than for the digital workflows (P<.001). No statistically significant differences were found between group L and group CiD or between group iT and group CiL. No statistical differences in time for the clinical evaluation were found among groups, indicating similar outcomes (P>.05). Irrespective of the CAD-CAM system, the overall laboratory working time for a digital workflow was significantly shorter than for the conventional workflow, since the dental technician needed less active working time. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
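
    As an illustration of the reported statistical comparison (paired Wilcoxon tests between workflow groups with a Bonferroni correction), the sketch below compares per-participant working times for three of the groups; the data values are invented placeholders, not the study's measurements.

        # Sketch of the analysis: paired Wilcoxon signed-rank tests between workflow groups
        # with a Bonferroni-adjusted alpha. The working times below are invented.
        from itertools import combinations
        from scipy.stats import wilcoxon

        times = {   # active working time (minutes) per participant, by workflow group
            "L":  [88, 82, 95, 90, 85, 92, 87, 84, 91, 86],
            "iT": [74, 70, 80, 72, 76, 78, 73, 69, 75, 71],
            "K":  [148, 140, 155, 150, 145, 152, 147, 143, 151, 149],
        }

        pairs = list(combinations(times, 2))
        alpha = 0.05 / len(pairs)                 # Bonferroni correction
        for a, b in pairs:
            stat, p = wilcoxon(times[a], times[b])
            verdict = "significant" if p < alpha else "not significant"
            print("%s vs %s: p = %.4f (%s at corrected alpha)" % (a, b, p, verdict))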

  10. Accessing microfluidics through feature-based design software for 3D printing.

    PubMed

    Shankles, Peter G; Millet, Larry J; Aufrecht, Jayde A; Retterer, Scott T

    2018-01-01

    Additive manufacturing has been a cornerstone of the product development pipeline for decades, playing an essential role in the creation of both functional and cosmetic prototypes. In recent years, the prospects for distributed and open source manufacturing have grown tremendously. This growth has been enabled by an expanding library of printable materials, low-cost printers, and communities dedicated to platform development. The microfluidics community has embraced this opportunity to integrate 3D printing into the suite of manufacturing strategies used to create novel fluidic architectures. The rapid turnaround time and low cost to implement these strategies in the lab makes 3D printing an attractive alternative to conventional micro- and nanofabrication techniques. In this work, the production of multiple microfluidic architectures using a hybrid 3D printing-soft lithography approach is demonstrated and shown to enable rapid device fabrication with channel dimensions that take advantage of laminar flow characteristics. The fabrication process outlined here is underpinned by the implementation of custom design software with an integrated slicer program that replaces less intuitive computer aided design and slicer software tools. Devices are designed in the program by assembling parameterized microfluidic building blocks. The fabrication process and flow control within 3D printed devices were demonstrated with a gradient generator and two droplet generator designs. Precise control over the printing process allowed 3D microfluidics to be printed in a single step by extruding bridge structures to 'jump-over' channels in the same plane. This strategy was shown to integrate with conventional nanofabrication strategies to simplify the operation of a platform that incorporates both nanoscale features and 3D printed microfluidics.

  11. Accessing microfluidics through feature-based design software for 3D printing

    PubMed Central

    Shankles, Peter G.; Millet, Larry J.; Aufrecht, Jayde A.

    2018-01-01

    Additive manufacturing has been a cornerstone of the product development pipeline for decades, playing an essential role in the creation of both functional and cosmetic prototypes. In recent years, the prospects for distributed and open source manufacturing have grown tremendously. This growth has been enabled by an expanding library of printable materials, low-cost printers, and communities dedicated to platform development. The microfluidics community has embraced this opportunity to integrate 3D printing into the suite of manufacturing strategies used to create novel fluidic architectures. The rapid turnaround time and low cost to implement these strategies in the lab makes 3D printing an attractive alternative to conventional micro- and nanofabrication techniques. In this work, the production of multiple microfluidic architectures using a hybrid 3D printing-soft lithography approach is demonstrated and shown to enable rapid device fabrication with channel dimensions that take advantage of laminar flow characteristics. The fabrication process outlined here is underpinned by the implementation of custom design software with an integrated slicer program that replaces less intuitive computer aided design and slicer software tools. Devices are designed in the program by assembling parameterized microfluidic building blocks. The fabrication process and flow control within 3D printed devices were demonstrated with a gradient generator and two droplet generator designs. Precise control over the printing process allowed 3D microfluidics to be printed in a single step by extruding bridge structures to ‘jump-over’ channels in the same plane. This strategy was shown to integrate with conventional nanofabrication strategies to simplify the operation of a platform that incorporates both nanoscale features and 3D printed microfluidics. PMID:29596418

  12. Critical Thinking Skills of Students through Mathematics Learning with ASSURE Model Assisted by Software Autograph

    NASA Astrophysics Data System (ADS)

    Kristianti, Y.; Prabawanto, S.; Suhendra, S.

    2017-09-01

    This study aims to examine the critical thinking ability of students who learn mathematics with the ASSURE learning model assisted by Autograph software. The design of this study was an experimental pre-test and post-test control group design. The experimental group received mathematics learning with the ASSURE model assisted by Autograph software, and the control group received mathematics learning with the conventional model. The data were obtained through critical thinking skills tests. This research was conducted at the junior high school level, with a population of students at a junior high school in Subang Regency in the 2016/2017 school year and a sample of two grade VIII classes at that school. The research data were analyzed quantitatively; the normalized gain between the two sample groups was compared using a one-way ANOVA test. The results show that mathematics learning with the ASSURE model assisted by Autograph software can improve the critical thinking ability of junior high school students. Mathematics learning using the ASSURE model assisted by Autograph software is significantly better at improving the critical thinking skills of junior high school students than the conventional model.
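
    A minimal sketch of the reported analysis, computing a normalized gain per student and comparing the two groups with a one-way ANOVA, is given below; the scores are invented placeholders, not the study's data.

        # Sketch of the analysis: normalized gain per student, compared between the
        # experimental and control groups with a one-way ANOVA. Scores are invented.
        import numpy as np
        from scipy.stats import f_oneway

        def normalized_gain(pre, post, max_score=100.0):
            pre, post = np.asarray(pre, float), np.asarray(post, float)
            return (post - pre) / (max_score - pre)

        pre_exp,  post_exp  = [40, 55, 35, 50, 45], [80, 85, 70, 90, 75]
        pre_ctrl, post_ctrl = [42, 50, 38, 48, 46], [60, 65, 55, 70, 62]

        g_exp = normalized_gain(pre_exp, post_exp)
        g_ctrl = normalized_gain(pre_ctrl, post_ctrl)
        f_stat, p_value = f_oneway(g_exp, g_ctrl)
        print("mean gain: ASSURE+Autograph %.2f, conventional %.2f, p = %.3f"
              % (g_exp.mean(), g_ctrl.mean(), p_value))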

  13. Communicating and visualizing data quality through Web Map Services

    NASA Astrophysics Data System (ADS)

    Roberts, Charles; Blower, Jon; Maso, Joan; Diaz, Daniel; Griffiths, Guy; Lewis, Jane

    2014-05-01

    The sharing and visualization of environmental data through OGC Web Map Services is becoming increasingly common. However, information about the quality of data is rarely presented. (In this presentation we consider mostly data uncertainty as a measure of quality, although we acknowledge that many other quality measures are relevant to the geoscience community.) In the context of the GeoViQua project (http://www.geoviqua.org) we have developed conventions and tools for using WMS to deliver data quality information. The "WMS-Q" convention describes how the WMS specification can be used to publish quality information at the level of datasets, variables and individual pixels (samples). WMS-Q requires no extensions to the WMS 1.3.0 specification, being entirely backward-compatible. (An earlier version of WMS-Q was published as OGC Engineering Report 12-160.) To complement the WMS-Q convention, we have also developed extensions to the OGC Symbology Encoding (SE) specification, enabling uncertain geoscience data to be portrayed using a variety of visualization techniques. These include contours, stippling, blackening, whitening, opacity, bivariate colour maps, confidence interval triangles and glyphs. There may also be more extensive applications of these methods beyond the visual representation of uncertainty. In this presentation we will briefly describe the scope of the WMS-Q and "extended SE" specifications and then demonstrate the innovations using open-source software based upon ncWMS (http://ncwms.sf.net). We apply the tools to a variety of datasets including Earth Observation data from the European Space Agency's Climate Change Initiative. The software allows uncertain raster data to be shared through Web Map Services, giving the user fine control over data visualization.
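
    To make the WMS context concrete, the snippet below assembles a standard WMS 1.3.0 GetMap request of the kind such a service answers; the server URL, layer name, and style are placeholder assumptions, and the WMS-Q and extended Symbology Encoding parameters developed in the project are not reproduced here.

        # Assembling a standard WMS 1.3.0 GetMap request. The base URL, layer, and style
        # are placeholders; WMS-Q / extended-SE specifics are not shown.
        from urllib.parse import urlencode

        def getmap_url(base_url, layer, bbox, width=800, height=400,
                       crs="CRS:84", style="", fmt="image/png"):
            params = {
                "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
                "LAYERS": layer, "STYLES": style, "CRS": crs,
                "BBOX": ",".join(str(v) for v in bbox),
                "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
            }
            return base_url + "?" + urlencode(params)

        if __name__ == "__main__":
            print(getmap_url("https://example.org/wms", "sst_uncertainty",
                             bbox=(-180, -90, 180, 90)))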

  14. Noble-TLBO MPPT Technique and its Comparative Analysis with Conventional methods implemented on Solar Photo Voltaic System

    NASA Astrophysics Data System (ADS)

    Patsariya, Ajay; Rai, Shiwani; Kumar, Yogendra, Dr.; Kirar, Mukesh, Dr.

    2017-08-01

    The energy crisis, particularly in economies with growing GDPs, has brought about a new panorama of sustainable power sources such as solar energy, which has seen enormous growth. Increasingly high penetration levels of photovoltaic (PV) generation are emerging in the smart grid. Solar power is intermittent and variable, as the solar resource at ground level is highly dependent on cloud cover variability, atmospheric aerosol levels, and other weather parameters. The inherent variability of large-scale solar generation introduces significant challenges to smart grid energy management. Accurate forecasting of solar power/irradiance is essential for the secure and economic operation of the smart grid. In this paper a novel TLBO-based MPPT technique is proposed to harvest solar energy effectively. A comparative analysis is presented between the conventional P&O and IC methods and the proposed MPPT technique. The research was carried out in the MATLAB Simulink software, version 2013.
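
    For context on the conventional baselines mentioned above, a perturb-and-observe (P&O) MPPT update step can be sketched as follows; the step size and the toy panel model are illustrative assumptions, and the sketch does not reproduce the proposed TLBO method.

        # Conventional perturb-and-observe (P&O) MPPT step: keep perturbing the operating
        # voltage in the same direction while power increases, reverse otherwise.
        # Step size and the single-peak toy P-V curve are illustrative assumptions.
        def po_step(v, p, v_prev, p_prev, dv=0.5):
            """Return the next reference voltage from present and previous (V, P)."""
            if p >= p_prev:
                step = dv if v >= v_prev else -dv    # power rose: keep direction
            else:
                step = -dv if v >= v_prev else dv    # power fell: reverse direction
            return v + step

        def panel_power(v, v_mpp=30.0, p_max=200.0):
            """Toy P-V curve used only for demonstration."""
            return max(0.0, p_max - 0.5 * (v - v_mpp) ** 2)

        if __name__ == "__main__":
            v_prev, p_prev, v = 20.0, panel_power(20.0), 20.5
            for _ in range(100):
                p = panel_power(v)
                v, v_prev, p_prev = po_step(v, p, v_prev, p_prev), v, p
            print("operating point settles near V_mpp: %.1f V" % v)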

  15. A review of cellphone microscopy for disease detection.

    PubMed

    Dendere, R; Myburg, N; Douglas, T S

    2015-12-01

    The expansion in global cellphone network coverage coupled with advances in cellphone imaging capabilities present an opportunity for the advancement of cellphone microscopy as a low-cost alternative to conventional microscopy for disease detection in resource-limited regions. The development of cellphone microscopy has also benefitted from the availability of low-cost miniature microscope components such as low-power light-emitting diodes and ball lenses. As a result, researchers are developing hardware and software techniques that would enable such microscopes to produce high-resolution, diagnostic-quality images. This approach may lead to more widespread delivery of diagnostic services in resource-limited areas where there is a shortage of the skilled labour required for conventional microscopy and where prevalence of infectious and other diseases is still high. In this paper, we review current techniques, clinical applications and challenges faced in the field of cellphone microscopy. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  16. Automated System Checkout to Support Predictive Maintenance for the Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Deb, Somnath; Kulkarni, Deepak; Wang, Yao; Lau, Sonie (Technical Monitor)

    1998-01-01

    The Propulsion Checkout and Control System (PCCS) is a predictive maintenance software system. The real-time checkout procedures and diagnostics are designed to detect components that need maintenance based on their condition, rather than using more conventional approaches such as scheduled or reliability centered maintenance. Predictive maintenance can reduce turn-around time and cost and increase safety as compared to conventional maintenance approaches. Real-time sensor validation, limit checking, statistical anomaly detection, and failure prediction based on simulation models are employed. Multi-signal models, useful for testability analysis during system design, are used during the operational phase to detect and isolate degraded or failed components. The TEAMS-RT real-time diagnostic engine, developed by Qualtech Systems, Inc., utilizes the multi-signal models. The capability of predicting the maintenance condition was successfully demonstrated with a variety of data, from simulation to actual operation on the Integrated Propulsion Technology Demonstrator (IPTD) at Marshall Space Flight Center (MSFC). Playback of IPTD valve actuations for feature recognition updates identified an otherwise undetectable Main Propulsion System 12-inch prevalve degradation. The algorithms were loaded into the Propulsion Checkout and Control System for further development and are the first known application of predictive Integrated Vehicle Health Management to an operational cryogenic testbed. The software performed successfully in real time, meeting the required performance goal of a 1 second cycle time.
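
    Of the techniques listed, sensor limit checking and simple statistical anomaly detection are the easiest to illustrate; the sketch below flags readings that leave fixed engineering limits or depart sharply from a running mean. The limits, window length, and threshold are illustrative assumptions, not PCCS values.

        # Illustrative limit checking plus z-score anomaly detection for one sensor channel.
        # Limits, window length, and threshold are assumptions, not the PCCS settings.
        from collections import deque
        import math

        class SensorMonitor:
            def __init__(self, low, high, window=50, z_threshold=4.0):
                self.low, self.high = low, high
                self.history = deque(maxlen=window)
                self.z_threshold = z_threshold

            def check(self, value):
                if not (self.low <= value <= self.high):
                    return "limit violation"
                if len(self.history) >= 10:
                    mean = sum(self.history) / len(self.history)
                    var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
                    std = math.sqrt(var)
                    if std > 0 and abs(value - mean) / std > self.z_threshold:
                        return "statistical anomaly"
                self.history.append(value)
                return "nominal"

        if __name__ == "__main__":
            monitor = SensorMonitor(low=0.0, high=500.0)
            for reading in [101, 99, 100, 102, 98, 100, 101, 99, 100, 100, 100, 250]:
                print(reading, monitor.check(reading))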

  17. Programmed database system at the Chang Gung Craniofacial Center: part II--digitizing photographs.

    PubMed

    Chuang, Shiow-Shuh; Hung, Kai-Fong; de Villa, Glenda H; Chen, Philip K T; Lo, Lun-Jou; Chang, Sophia C N; Yu, Chung-Chih; Chen, Yu-Ray

    2003-07-01

    The archival tools used for digital images in advertising do not fulfill clinical requirements, and clinical tools are only beginning to be developed. Storing a large number of conventional photographic slides requires a great deal of space and special conditions. In spite of special precautions, degradation of the slides still occurs. The most common degradation is the appearance of fungus flecks. With the recent advances in digital technology, it is now possible to store voluminous numbers of photographs on a computer hard drive and keep them for a long time. A self-programmed interface has been developed to integrate a database and an image browser system that can build the archive and locate needed files in a matter of seconds with the click of a button. The hardware and software required by this system are commercially available. There are 25,200 patients recorded in the database, involving 24,331 procedures. The image files cover 6,384 patients with 88,366 digital picture files. From 1999 through 2002, NT$400,000 was saved using the new system. Photographs can be managed with the integrated database and browser software for archiving. This allows labeling of the individual photographs with demographic information and browsing. Digitized images are not only more efficient and economical than the conventional slide images, but they also facilitate clinical studies.

  18. CORMIX2: AN EXPERT SYSTEM FOR HYDRODYNAMIC MIXING ZONE ANALYSIS OF CONVENTIONAL AND TOXIC MULTIPORT DIFFUSER DISCHARGES

    EPA Science Inventory

    CORMIX is a series of software systems for the analysis, prediction, and design of aqueous toxic or conventional pollutant discharges into watercourses, with emphasis on the geometry and dilution characteristics of the initial mixing zone. Subsystem CORMIX1 deals with submerged si...

  19. SEPAC flight software detailed design specifications, volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The detailed design specifications (as built) for the SEPAC Flight Software are defined. The design includes a description of the total software system and of each individual module within the system. The design specifications describe the decomposition of the software system into its major components. The system structure is expressed in the following forms: the control-flow hierarchy of the system, the data-flow structure of the system, the task hierarchy, the memory structure, and the software-to-hardware configuration mapping. The component design description includes details on the following elements: register conventions, module (subroutine) invocation, module functions, interrupt servicing, data definitions, and database structure.

  20. An architectural approach to create self organizing control systems for practical autonomous robots

    NASA Technical Reports Server (NTRS)

    Greiner, Helen

    1991-01-01

    For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles towards the construction of more general robot control systems are as follows: (1) Growth problem; (2) Software generation; (3) Interaction with environment; (4) Reliability; and (5) Resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.

  1. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.
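
    As a small, hedged example of how these open tools fit together, the snippet below reads a CF-convention NetCDF forecast field with the netCDF4 library and plots it with Matplotlib; the file name and the variable and coordinate names are placeholder assumptions.

        # Reading a CF-convention NetCDF forecast field and plotting it with Matplotlib.
        # "forecast.nc" and the variable/coordinate names are placeholder assumptions.
        import matplotlib.pyplot as plt
        import netCDF4

        with netCDF4.Dataset("forecast.nc") as nc:
            lon = nc.variables["longitude"][:]
            lat = nc.variables["latitude"][:]
            temp = nc.variables["air_temperature"][0, :, :]   # first forecast time step
            units = nc.variables["air_temperature"].units     # CF metadata attribute

        plt.pcolormesh(lon, lat, temp, shading="auto")
        plt.colorbar(label=units)
        plt.xlabel("longitude")
        plt.ylabel("latitude")
        plt.title("forecast field (first time step)")
        plt.show()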

  2. Impact of computer-based treatment planning software on clinical judgment of dental students for planning prosthodontic rehabilitation

    PubMed Central

    Deshpande, Saee; Chahande, Jayashree

    2014-01-01

    Purpose Successful prosthodontic rehabilitation involves making many interrelated clinical decisions which have an impact on each other. Self-directed computer-based training has been shown to be a very useful tool to develop synthetic and analytical problem-solving skills among students. Thus, a computer-based case study and treatment planning (CSTP) software program was developed which would allow students to work through the process of comprehensive, multidisciplinary treatment planning for patients in a structured and logical manner. The present study was aimed at assessing the effect of this CSTP software on the clinical judgment of dental students while planning prosthodontic rehabilitation and to assess the students’ perceptions about using the program for its intended use. Methods A CSTP software program was developed and validated. The impact of this program on the clinical decision making skills of dental graduates was evaluated by real life patient encounters, using a modified and validated mini-CEX. Students’ perceptions about the program were obtained by a pre-validated feedback questionnaire. Results The faculty assessment scores of clinical judgment improved significantly after the use of this program. The majority of students felt it was an informative, useful, and innovative way of learning and they strongly felt that they had learnt the logical progression of planning, the insight into decision making, and the need for flexibility in treatment planning after using this program. Conclusion CSTP software was well received by the students. There was significant improvement in students’ clinical judgment after using this program. It should thus be envisaged fundamentally as an adjunct to conventional teaching techniques to improve students’ decision making skills and confidence. PMID:25170288

  3. MO-DE-BRA-02: SIMAC: A Simulation Tool for Teaching Linear Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlone, M; Harnett, N; Department of Radiation Oncology, University of Toronto, Toronto, Ontario

    Purpose: The first goal of this work is to develop software that can simulate the physics of linear accelerators (linac). The second goal is to show that this simulation tool is effective in teaching linac physics to medical physicists and linac service engineers. Methods: Linacs were modeled using analytical expressions that can correctly describe the physical response of a linac to parameter changes in real time. These expressions were programmed with a graphical user interface in order to produce an environment similar to that of linac service mode. The software, "SIMAC", has been used as a learning aid in a professional development course 3 times (2014 – 2016) as well as in a physics graduate program. Exercises were developed to supplement the didactic components of the courses, consisting of activities designed to reinforce the concepts of beam loading; the effect of steering coil currents on beam symmetry; and the relationship between beam energy and flatness. Results: SIMAC was used to teach 35 professionals (medical physicists; regulators; service engineers; 1 week course) as well as 20 graduate students (1 month project). In the student evaluations, 85% of the students rated the effectiveness of SIMAC as very good or outstanding, and 70% rated the software as the most effective part of the courses. Exercise results were collected showing that 100% of the students were able to use the software correctly. In exercises involving gross changes to linac operating points (i.e. energy changes) the majority of students were able to correctly perform these beam adjustments. Conclusion: Software simulation (SIMAC) can be used to effectively teach linac physics. In short courses, students were able to correctly make gross parameter adjustments that typically require much longer training times using conventional training methods.

  4. Intelligent fault management for the Space Station active thermal control system

    NASA Technical Reports Server (NTRS)

    Hill, Tim; Faltisco, Robert M.

    1992-01-01

    The Thermal Advanced Automation Project (TAAP) approach and architecture is described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.

  5. SDR/STRS Flight Experiment and the Role of SDR-Based Communication and Navigation Systems

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2008-01-01

    This presentation describes an open architecture SDR (software defined radio) infrastructure, suitable for space-based radios and operations, entitled Space Telecommunications Radio System (STRS). SDR technologies will endow space and planetary exploration systems with dramatically increased capability, reduced power consumption, and less mass than conventional systems, at costs reduced by vigorous competition, hardware commonality, dense integration, minimizing the impact of parts obsolescence, improved interoperability, and software re-use. To advance the SDR architecture technology and demonstrate its applicability in space, NASA is developing a space experiment of multiple SDRs each with various waveforms to communicate with NASA s TDRSS satellite and ground networks, and the GPS constellation. An experiments program will investigate S-band and Ka-band communications, navigation, and networking technologies and operations.

  6. Semi-automatic computerized approach to radiological quantification in rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Steiner, Wolfgang; Schoeffmann, Sylvia; Prommegger, Andrea; Boegl, Karl; Klinger, Thomas; Peloschek, Philipp; Kainberger, Franz

    2004-04-01

    Rheumatoid Arthritis (RA) is a common systemic disease predominantly involving the joints. Precise diagnosis and follow-up therapy requires objective quantification. For this purpose, radiological analyses using standardized scoring systems are considered to be the most appropriate method. The aim of our study is to develop semi-automatic image analysis software, especially applicable for scoring of joints in rheumatic disorders. The X-Ray RheumaCoach software delivers various scoring systems (Larsen-Score and Ratingen-Rau-Score) which can be applied by the scorer. In addition to the qualitative assessment of joints performed by the radiologist, a semi-automatic image analysis for joint detection and measurements of bone diameters and swollen tissue supports the image assessment process. More than 3000 radiographs from hands and feet of more than 200 RA patients were collected, analyzed, and statistically evaluated. Radiographs were quantified using the conventional paper-based Larsen score and the X-Ray RheumaCoach software. The use of the software shortened the scoring time by about 25 percent and reduced the rate of erroneous scorings in all our studies. Compared to paper-based scoring methods, the X-Ray RheumaCoach software offers several advantages: (i) structured data analysis and input that minimizes variance by standardization, (ii) faster and more precise calculation of sum scores and indices, (iii) permanent data storage and fast access to the software's database, (iv) the possibility of cross-calculation to other scores, (v) semi-automatic assessment of images, and (vi) reliable documentation of results in the form of graphical printouts.

  7. Software Products

    NASA Technical Reports Server (NTRS)

    1993-01-01

    MAST is a decision support system to help in the management of dairy herds. Data is collected on dairy herds around the country and processed at regional centers. One center is Cornell University, where Dr. Lawrence Jones and his team developed MAST. The system draws conclusions from the data and summarizes it graphically. CLIPS, which is embedded in MAST, gives the system the ability to make decisions without user interaction. With this technique, dairy managers can identify herd problems quickly, resulting in improved animal health and higher milk quality. CLIPS (C Language Integrated Production System) was developed by NASA's Johnson Space Center. It is a shell for developing expert systems designed to permit research, development and delivery on conventional computers.

  8. A software tool of digital tomosynthesis application for patient positioning in radiotherapy.

    PubMed

    Yan, Hui; Dai, Jian-Rong

    2016-03-08

    Digital Tomosynthesis (DTS) is an image modality for reconstructing tomographic images from two-dimensional kV projections covering a narrow scan angle. Compared with conventional cone-beam CT (CBCT), it requires less time and radiation dose for data acquisition. It is feasible to apply this technique to patient positioning in radiotherapy. To facilitate its clinical application, a software tool was developed and the reconstruction processes were accelerated by a graphics processing unit (GPU). Two reconstruction and two registration processes are required for the DTS application, in contrast to the conventional CBCT application, which requires one image reconstruction process and one image registration process. The reconstruction stage consists of the production of two types of DTS. One type of DTS is reconstructed from cone-beam (CB) projections covering a narrow scan angle and is named onboard DTS (ODTS), which represents the real patient position in the treatment room. Another type of DTS is reconstructed from digitally reconstructed radiographs (DRRs) and is named reference DTS (RDTS), which represents the ideal patient position in the treatment room. Prior to the reconstruction of RDTS, the DRRs are reconstructed from the planning CT using the same acquisition settings as the CB projections. The registration stage consists of two matching processes between ODTS and RDTS. The target shifts in the lateral and longitudinal axes are obtained from the matching between ODTS and RDTS in the coronal view, while the target shifts in the longitudinal and vertical axes are obtained from the matching between ODTS and RDTS in the sagittal view. In this software, both the DRR and DTS reconstruction algorithms were implemented in GPU environments for acceleration. A comprehensive evaluation of this software tool was performed, including geometric accuracy, image quality, registration accuracy, and reconstruction efficiency. The average correlation coefficient between the DRR/DTS generated by the GPU-based algorithm and the CPU-based algorithm is 0.99. Based on the measurements of a cube phantom on DTS, the geometric errors are within 0.5 mm in all three axes. For both the cube phantom and a pelvic phantom, the registration errors are within 0.5 mm in all three axes. Compared with the CPU-based algorithms, the performance of DRR and DTS reconstruction is improved by a factor of 15 to 20. A GPU-based software tool was developed for the DTS application for patient positioning in radiotherapy. The geometric and registration accuracy meets the clinical requirements for patient setup in radiotherapy. The high performance of the DRR and DTS reconstruction algorithms was achieved through the GPU-based computation environment. It is a useful software tool for researchers and clinicians in evaluating the DTS application for patient positioning in radiotherapy.
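
    The registration step that extracts couch shifts from two orthogonal views can be illustrated with a generic translation estimate between two 2D images; the phase-correlation sketch below is such a generic method and is not the registration algorithm of the described software.

        # Generic 2D translation estimate by phase correlation, illustrating the kind of
        # rigid matching used to derive shifts from coronal and sagittal views. This is
        # not the registration algorithm implemented in the described software tool.
        import numpy as np

        def phase_correlation_shift(fixed, moving):
            """Return the (row, col) displacement of `moving` relative to `fixed`."""
            F = np.fft.fft2(fixed)
            M = np.fft.fft2(moving)
            cross = np.conj(F) * M
            cross /= np.maximum(np.abs(cross), 1e-12)
            corr = np.fft.ifft2(cross).real
            peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
            dims = np.array(corr.shape, dtype=float)
            peak[peak > dims / 2] -= dims[peak > dims / 2]    # wrap to signed shifts
            return peak

        if __name__ == "__main__":
            img = np.zeros((128, 128))
            img[40:60, 50:80] = 1.0
            shifted = np.roll(np.roll(img, 5, axis=0), -8, axis=1)
            print(phase_correlation_shift(img, shifted))       # approximately [5, -8]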

  9. The GEMPAK Barnes objective analysis scheme

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Desjardins, M.; Kocin, P. J.

    1981-01-01

    GEMPAK, an interactive computer software system developed for the purpose of assimilating, analyzing, and displaying various conventional and satellite meteorological data types, is discussed. The Barnes objective map analysis scheme possesses certain characteristics that allowed it to be adapted to meet the analysis needs of GEMPAK. Those characteristics and the specific adaptation of the scheme to GEMPAK are described. A step-by-step guide for using the GEMPAK Barnes scheme on an interactive computer (in real time) to analyze various types of meteorological datasets is also presented.
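
    As a point of reference for the scheme's core operation, a single-pass Barnes weighting can be sketched in a few lines: each grid-point value is a Gaussian-weighted average of the station observations. The grid, station values, and smoothing parameter kappa below are illustrative assumptions, not values used in GEMPAK.

        import numpy as np

        def barnes_analysis(xs, ys, obs, grid_x, grid_y, kappa=1.0):
            """Single-pass Barnes objective analysis: Gaussian-weighted average of
            station observations evaluated at each grid point."""
            analysis = np.zeros((len(grid_y), len(grid_x)))
            for j, gy in enumerate(grid_y):
                for i, gx in enumerate(grid_x):
                    d2 = (xs - gx) ** 2 + (ys - gy) ** 2   # squared station distances
                    w = np.exp(-d2 / kappa)                # Barnes Gaussian weights
                    analysis[j, i] = np.sum(w * obs) / np.sum(w)
            return analysis

        # Illustrative use: three scattered "stations" interpolated to a coarse grid
        xs = np.array([0.1, 0.4, 0.8]); ys = np.array([0.2, 0.7, 0.5])
        obs = np.array([10.0, 14.0, 12.0])
        field = barnes_analysis(xs, ys, obs, np.linspace(0, 1, 5), np.linspace(0, 1, 5), kappa=0.2)
        print(field.round(2))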

  10. Freeform Optics: current challenges for future serial production

    NASA Astrophysics Data System (ADS)

    Schindler, C.; Köhler, T.; Roth, E.

    2017-10-01

    One of the major recent developments in the optics industry is the commercial manufacturing of freeform surfaces for mid- and high-performance optical systems. Removing the restriction to rotational symmetry enables completely new optical design solutions - but it also creates completely new challenges for the manufacturer. Adapting serial production from rotationally symmetric to freeform optics cannot be done just by extending machine capabilities and software for every process step. New solutions for conventional optics production, or completely new process chains, are necessary.

  11. Combining real-time monitoring and knowledge-based analysis in MARVEL

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.; Quan, A. G.; Angelino, R.; Veregge, J. R.

    1993-01-01

    Real-time artificial intelligence is gaining increasing attention for applications in which conventional software methods are unable to meet technology needs. One such application area is the monitoring and analysis of complex systems. MARVEL, a distributed monitoring and analysis tool with multiple expert systems, was developed and successfully applied to the automation of interplanetary spacecraft operations at NASA's Jet Propulsion Laboratory. MARVEL implementation and verification approaches, the MARVEL architecture, and the specific benefits that were realized by using MARVEL in operations are described.

  12. Final Report - Regulatory Considerations for Adaptive Systems

    NASA Technical Reports Server (NTRS)

    Wilkinson, Chris; Lynch, Jonathan; Bharadwaj, Raj

    2013-01-01

    This report documents the findings of a preliminary research study into new approaches to the software design assurance of adaptive systems. We suggest a methodology to overcome the software validation and verification difficulties posed by the underlying assumption of non-adaptive software in the requirements-based testing verification methods in RTCA/DO-178B and C. An analysis of the relevant RTCA/DO-178B and C objectives is presented, showing the reasons for the difficulties that arise in showing satisfaction of the objectives and suggesting additional means by which they could be satisfied. We suggest that the software design assurance problem for adaptive systems is principally one of developing correct and complete high level requirements and system level constraints that define the necessary system functional and safety properties to assure the safe use of adaptive systems. We show how analytical techniques such as model based design, mathematical modeling and formal or formal-like methods can be used to both validate the high level functional and safety requirements, establish necessary constraints and provide the verification evidence for the satisfaction of requirements and constraints that supplements conventional testing. Finally, the report identifies the follow-on research topics needed to implement this methodology.

  13. Software For Design Of Life-Support Systems

    NASA Technical Reports Server (NTRS)

    Rudokas, Mary R.; Cantwell, Elizabeth R.; Robinson, Peter I.; Shenk, Timothy W.

    1991-01-01

    Design Assistant Workstation (DAWN) computer program is prototype of expert software system for analysis and design of regenerative, physical/chemical life-support systems that revitalize air, reclaim water, produce food, and treat waste. Incorporates both conventional software for quantitative mathematical modeling of physical, chemical, and biological processes and expert system offering user stored knowledge about materials and processes. Constructs task tree as it leads user through simulated process, offers alternatives, and indicates where alternative not feasible. Also enables user to jump from one design level to another.

  14. ANOPP programming and documentation standards document

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Standards defining the requirements for preparing software for the Aircraft Noise Prediction Program (ANOPP) were given. It is the intent of these standards to provide definition, design, coding, and documentation criteria for the achievement of a unity among ANOPP products. These standards apply to all of ANOPP's standard software system. The standards encompass philosophy as well as techniques and conventions.

  15. Digital Modeling in Design Foundation Coursework: An Exploratory Study of the Effectiveness of Conceptual Design Software

    ERIC Educational Resources Information Center

    Guidera, Stan; MacPherson, D. Scot

    2008-01-01

    This paper presents the results of a study that was conducted to identify and document student perceptions of the effectiveness of computer modeling software introduced in a design foundations course that had previously utilized only conventional manually-produced representation techniques. Rather than attempt to utilize a production-oriented CAD…

  16. Googling DNA sequences on the World Wide Web.

    PubMed

    Hajibabaei, Mehrdad; Singer, Gregory A C

    2009-11-10

    New web-based technologies provide an excellent opportunity for sharing and accessing information and using the web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm and implemented it for searching species-specific genomic sequences, DNA barcodes, by using popular web-based methods such as Google. We developed an alignment-independent, character-based algorithm based on dividing a sequence library (DNA barcodes) and a query sequence into words. The actual search is conducted by conventional search tools such as the freely available Google Desktop Search. We implemented our algorithm in two exemplar packages. We developed pre- and post-processing software to provide customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data. It provides a robust and efficient solution for sequence search on the web. The integration of our search method for large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
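
    The word-splitting idea at the heart of this approach can be sketched as follows: a barcode or query sequence is broken into fixed-length "words" that a conventional text-search engine can index like ordinary prose, and a query is matched by counting shared words. The word length, scoring, and the tiny in-memory "library" below are illustrative assumptions, not the published algorithm's parameters.

        def to_words(seq, word_len=10):
            """Break a DNA sequence into fixed-length 'words' so that a conventional
            text search engine can index and query it like ordinary text."""
            seq = seq.upper().replace("\n", "")
            return [seq[i:i + word_len] for i in range(0, len(seq) - word_len + 1, word_len)]

        def word_overlap_score(query, barcode, word_len=10):
            """Alignment-free similarity: fraction of query words found in the barcode."""
            q_words = to_words(query, word_len)
            b_words = set(to_words(barcode, word_len))
            hits = sum(1 for w in q_words if w in b_words)
            return hits / max(len(q_words), 1)

        # Illustrative query against a tiny two-entry 'library'
        library = {"speciesA": "ACGT" * 40, "speciesB": "TTGCA" * 32}
        query = "ACGT" * 20
        print(max(library, key=lambda k: word_overlap_score(query, library[k])))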

  17. Design and implementation of a sigma delta technology based pulse oximeter's acquisition stage

    NASA Astrophysics Data System (ADS)

    Rossi, E. E.; Peñalva, A.; Schaumburg, F.

    2011-12-01

    Pulse oximetry is a widely used tool in medical practice for estimating a patient's fraction of hemoglobin bound to oxygen. Conventional oximetry presents limitations when the baseline changes or when the signals involved have low amplitude. The aim of this paper is to simultaneously overcome these constraints and to simplify the required circuitry by using ΣΔ technology. For this purpose, a board for the acquisition of the needed signals was developed, together with PC-based software that controls it and displays and processes the acquired information in real time. Laboratory and field tests were also designed and executed to verify the performance of this equipment in adverse situations. A simple, robust, and economical instrument was achieved, capable of obtaining signals even in situations where conventional oximetry fails.

  18. Common Data Models and Efficient Reproducible Workflows for Distributed Ocean Model Skill Assessment

    NASA Astrophysics Data System (ADS)

    Signell, R. P.; Snowden, D. P.; Howlett, E.; Fernandes, F. A.

    2014-12-01

    Model skill assessment requires discovery, access, analysis, and visualization of information from both sensors and models, and traditionally has been possible only by a few experts. The US Integrated Ocean Observing System (US-IOOS) consists of 17 Federal Agencies and 11 Regional Associations that produce data from various sensors and numerical models; exactly the information required for model skill assessment. US-IOOS is seeking to develop documented skill assessment workflows that are standardized, efficient, and reproducible so that a much wider community can participate in the use and assessment of model results. Standardization requires common data models for observational and model data. US-IOOS relies on the CF Conventions for observations and structured grid data, and on the UGRID Conventions for unstructured (e.g. triangular) grid data. This allows applications to obtain only the data they require in a uniform and parsimonious way using web services: OPeNDAP for model output and OGC Sensor Observation Service (SOS) for observed data. Reproducibility is enabled with IPython Notebooks shared on GitHub (http://github.com/ioos). These capture the entire skill assessment workflow, including user input, search, access, analysis, and visualization, ensuring that workflows are self-documenting and reproducible by anyone, using free software. Python packages for common data models are Pyugrid and the British Met Office Iris package. Python packages required to run the workflows (pyugrid, pyoos, and the British Met Office Iris package) are also available on GitHub and on Binstar.org so that users can run scenarios using the free Anaconda Python distribution. Hosted services such as Wakari enable anyone to reproduce these workflows for free, without installing any software locally, using just their web browser. We are also experimenting with Wakari Enterprise, which allows multi-user access from a web browser to an IPython Server running where large quantities of model output reside, increasing the efficiency. The open development and distribution of these workflows, and the software on which they depend, is an educational resource for those new to the field and a center of focus where practitioners can contribute new software and ideas.

  19. Simple and efficient method for region of interest value extraction from picture archiving and communication system viewer with optical character recognition software and macro program.

    PubMed

    Lee, Young Han; Park, Eun Hae; Suh, Jin-Suck

    2015-01-01

    The objectives are: 1) to introduce a simple and efficient method for extracting region of interest (ROI) values from a Picture Archiving and Communication System (PACS) viewer using optical character recognition (OCR) software and a macro program, and 2) to evaluate the accuracy of this method on a PACS workstation. The module was designed to extract ROI values from PACS images and was created as a development tool using open-source OCR software and an open-source macro program. The principal processes are as follows: (1) capture the region containing the ROI values as a graphic file for OCR, (2) recognize the text from the captured image with the OCR software, (3) perform error correction, (4) extract the values, including area, average, standard deviation, maximum, and minimum, from the text, (5) reformat the values into temporary strings with tabs, and (6) paste the temporary strings into the spreadsheet. This process was repeated for each ROI. The accuracy of the module was evaluated on 1040 recognitions from 280 randomly selected ROIs of magnetic resonance images. ROI input times were compared between the conventional manual method and the extraction-module-assisted input method. The module for extracting ROI values operated successfully using the OCR and macro programs. The values of area, average, standard deviation, maximum, and minimum could be recognized and error-corrected by the AutoHotkey-coded module. The average input times using the conventional method and the proposed module-assisted method were 34.97 seconds and 7.87 seconds, respectively. A simple and efficient method for ROI value extraction was developed with open-source OCR and a macro program. Accurate values from numerous ROIs can be extracted with this module. The proposed module could be applied to the next generation of PACS or to existing PACS that have not yet been upgraded. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
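
    The recognize-and-parse step of this workflow can be sketched in a few lines. The published module was built with AutoHotkey and a desktop OCR tool; the sketch below only illustrates the same capture-recognize-parse-paste idea in Python, with pytesseract assumed as the OCR engine and a plain regular expression standing in for the module's error-correction logic.

        import re
        import pytesseract           # assumed OCR engine for this sketch
        from PIL import Image

        FIELDS = ("area", "average", "sd", "max", "min")

        def roi_values_from_capture(png_path):
            """Recognize the text in a screen capture of the ROI read-out and return
            the numeric values as one tab-separated string, ready to paste into a
            spreadsheet row."""
            text = pytesseract.image_to_string(Image.open(png_path))
            text = text.replace(",", "")                     # drop thousands separators
            numbers = re.findall(r"-?\d+(?:\.\d+)?", text)   # area, average, SD, max, min
            values = dict(zip(FIELDS, numbers[:len(FIELDS)]))
            return "\t".join(values.get(f, "") for f in FIELDS)

        # One call per captured ROI, e.g. roi_values_from_capture("roi_capture.png")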

  20. Generation of intra-oral-like images from cone beam computed tomography volumes for dental forensic image comparison.

    PubMed

    Trochesset, Denise A; Serchuk, Richard B; Colosi, Dan C

    2014-03-01

    Identification of unknown individuals using dental comparison is well established in the forensic setting. The identification technique can be time- and resource-consuming if many individuals need to be identified at once. Medical CT (MDCT) for dental profiling has had limited success, mostly due to artifact from metal-containing dental restorations and implants. The authors describe a CBCT reformatting technique that creates images, which closely approximate conventional dental images. Using an i-CAT Platinum CBCT unit and standard-issue i-CAT Vision software, a protocol is developed to reproducibly and reliably reformat CBCT volumes. The reformatted images are presented with conventional digital images from the same anatomic area for comparison. The authors conclude that images derived from CBCT volumes following this protocol are similar enough to conventional dental radiographs to allow for dental forensic comparison/identification and that CBCT offers a superior option over MDCT for this purpose. © 2013 American Academy of Forensic Sciences.

  1. Common software and interface for different helmet-mounted display (HMD) aircraft symbology sets

    NASA Astrophysics Data System (ADS)

    Mulholland, Fred F.

    2000-06-01

    Different aircraft in different services and countries have their own set of symbology they want displayed on their HMD. Even as flight symbology is standardized, there will still be some differences for types of aircraft, different weapons, different sensors, and different countries. As an HMD supplier, we want to provide a system that can be used across all these applications with no changes in the system, including no changes in the software. This HMD system must also provide the flexibility to accommodate new symbology as it is developed over the years, again, with no change in the HMD software. VSI has developed its HMD software to accommodate F-15, F-16, F-18, and F-22 symbology sets for the Joint Helmet Mounted Cueing System. It also has the flexibility to accommodate the aircraft types and services of the Joint Strike Fighter: the Conventional Takeoff and Landing variant for the USAF, the Carrier-based variant for the USN, and the Short Takeoff and Vertical Landing variant for the USMC and U.K. Royal Navy and Air Force. The key to this flexibility is the interface definition. The interface parameters are established at power-on with the download of an interface definition data set. This data set is used to interpret symbology commands from the aircraft OFP during operation and provide graphic commands to the HMD software. This presentation will define the graphics commands, provide an example of how the interface definition data set is defined, and then show how symbology commands produce a display.

  2. Research interface on a programmable ultrasound scanner.

    PubMed

    Shamdasani, Vijay; Bae, Unmin; Sikdar, Siddhartha; Yoo, Yang Mo; Karadayi, Kerem; Managuli, Ravi; Kim, Yongmin

    2008-07-01

    Commercial ultrasound machines in the past did not provide the ultrasound researchers access to raw ultrasound data. Lack of this ability has impeded evaluation and clinical testing of novel ultrasound algorithms and applications. Recently, we developed a flexible ultrasound back-end where all the processing for the conventional ultrasound modes, such as B, M, color flow and spectral Doppler, was performed in software. The back-end has been incorporated into a commercial ultrasound machine, the Hitachi HiVision 5500. The goal of this work is to develop an ultrasound research interface on the back-end for acquiring raw ultrasound data from the machine. The research interface has been designed as a software module on the ultrasound back-end. To increase the amount of raw ultrasound data that can be spooled in the limited memory available on the back-end, we have developed a method that can losslessly compress the ultrasound data in real time. The raw ultrasound data could be obtained in any conventional ultrasound mode, including duplex and triplex modes. Furthermore, use of the research interface does not decrease the frame rate or otherwise affect the clinical usability of the machine. The lossless compression of the ultrasound data in real time can increase the amount of data spooled by approximately 2.3 times, thus allowing more than 6s of raw ultrasound data to be acquired in all the modes. The interface has been used not only for early testing of new ideas with in vitro data from phantoms, but also for acquiring in vivo data for fine-tuning ultrasound applications and conducting clinical studies. We present several examples of how newer ultrasound applications, such as elastography, vibration imaging and 3D imaging, have benefited from this research interface. Since the research interface is entirely implemented in software, it can be deployed on existing HiVision 5500 ultrasound machines and may be easily upgraded in the future. The developed research interface can aid researchers in the rapid testing and clinical evaluation of new ultrasound algorithms and applications. Additionally, we believe that our approach would be applicable to designing research interfaces on other ultrasound machines.

  3. Exploiting current-generation graphics hardware for synthetic-scene generation

    NASA Astrophysics Data System (ADS)

    Tanner, Michael A.; Keen, Wayne A.

    2010-04-01

    Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores for current NVIDIA Corporation GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. Taking advantage of this potential requires an algorithm implementation that is structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. Included in this paper will be various language tradeoffs between conventional shader programming, Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.

  4. A flexible tool for diagnosing water, energy, and entropy budgets in climate models

    NASA Astrophysics Data System (ADS)

    Lembo, Valerio; Lucarini, Valerio

    2017-04-01

    We have developed a new, flexible software tool for studying the global energy budget, the hydrological cycle, and the material entropy production of global climate models. The program receives as input radiative, latent, and sensible energy fluxes, with the requirement that the variable names are in agreement with the Climate and Forecast (CF) conventions for the production of NetCDF datasets. Annual mean maps, meridional sections, and time series are computed by means of the Climate Data Operators (CDO) collection of command-line operators developed at the Max Planck Institute for Meteorology (MPI-M). If a land-sea mask is provided, the program also computes the required quantities separately over the continents and oceans. Depending on the user's choice, the program also calls MATLAB to compute meridional heat transports and the locations and intensities of their peaks in the two hemispheres. We are currently planning to adapt the program so that it can be included in the Earth System Model eValuation Tool (ESMValTool) community diagnostics.

  5. Development of Total Knee Replacement Digital Templating Software

    NASA Astrophysics Data System (ADS)

    Yusof, Siti Fairuz; Sulaiman, Riza; Thian Seng, Lee; Mohd. Kassim, Abdul Yazid; Abdullah, Suhail; Yusof, Shahril; Omar, Masbah; Abdul Hamid, Hamzaini

    In this study, taking full advantage of digital X-ray and computer technology, we have developed a semi-automated procedure for templating knee implants using a digital templating method. Using this approach, a software system called OrthoKnee™ has been designed and developed. The system is to be used in a study in the Department of Orthopaedics and Traumatology of the medical faculty, UKM (FPUKM). The OrthoKnee™ templating process employs a technique similar to the one used by many surgeons, who place acetate templates over X-ray films. This templating technique makes it easy to template various implants from different implant manufacturers with a comprehensive database of templates. The templating functionality includes knee templates from the manufacturers Smith & Nephew and Zimmer. From a patient X-ray image, the OrthoKnee™ templates help to quickly and easily read the approximate template size needed. The visual templating features then allow us to quickly review multiple template sizes against the X-ray and thus obtain a nearly precise view of the implant size required. The system can assist by templating on one patient image and will generate reports that can accompany patient notes. The software system was implemented in Visual Basic 6.0 Pro using object-oriented techniques to manage the graphics and objects. The approaches used for image scaling will be discussed. Several measurements used in the orthopedic diagnosis process have been studied and added to the software as measurement tool features using mathematical theorems and equations. The study compared the results of the semi-automated (digital templating) method with the conventional method to demonstrate the accuracy of the system.

  6. System Advisor Model (SAM) General Description (Version 2017.9.5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine M; DiOrio, Nicholas A; Blair, Nathan J

    This document describes the capabilities of the System Advisor Model (SAM) developed and distributed by the U.S. Department of Energy's National Renewable Energy Laboratory. The document is for potential users and others wanting to learn about the model's capabilities. SAM is a techno-economic computer model that calculates performance and financial metrics of renewable energy projects. Project developers, policy makers, equipment manufacturers, and researchers use graphs and tables of SAM results in the process of evaluating financial, technology, and incentive options for renewable energy projects. SAM simulates the performance of photovoltaic, concentrating solar power, solar water heating, wind, geothermal, biomass, and conventional power systems. The financial models are for projects that either buy and sell electricity at retail rates (residential and commercial) or sell electricity at a price determined in a power purchase agreement (PPA). SAM's simulation tools facilitate parametric and sensitivity analyses, Monte Carlo simulation and weather variability (P50/P90) studies. SAM can also read input variables from Microsoft Excel worksheets. For software developers, the SAM software development kit (SDK) makes it possible to use SAM simulation modules in their applications written in C/C++, C#, Java, Python, MATLAB, and other languages. NREL provides both SAM and the SDK as free downloads at https://sam.nrel.gov. SAM is an open source project, so its source code is available to the public. Researchers can study the code to understand the model algorithms, and software programmers can contribute their own models and enhancements to the project. Technical support and more information about the software are available on the website.

  7. [A personal computer-based system for online monitoring of neurologic intensive care patients].

    PubMed

    Stoll, M; Hamann, G; Jost, V; Schimrigk, K

    1992-03-01

    In the management of neurological intensive care patients with an intracranial space-consuming process, the measurement and recording of intracranial pressure together with arterial blood pressure is of special interest. These parameters can be used to monitor the treatment of brain edema and hypertension. Intracranial pressure measurement is also important in the diagnosis of the various subtypes of hydrocephalus. Not only the absolute figures, but also the recognition of specific pressure patterns is of particular clinical and scientific interest. This new, easily installed and inexpensive system comprises a PC and a conventional monitor, which are connected by an AD-conversion card. Our software, specially developed for this system, displays, stores, and prints the online course and the trend of the measurements. In addition, it is also possible to view the online course of conspicuous parts of the trend curve retrospectively and to use these values for statistical analyses. Object-oriented software development techniques were used for flexible graphic output to the screen, printer, or a file. Though developed for this specific purpose, the system is also suitable for recording continuous, longer-term measurements in general.

  8. Modeling and optimization of a hybrid solar combined cycle (HYCS)

    NASA Astrophysics Data System (ADS)

    Eter, Ahmad Adel

    2011-12-01

    The main objective of this thesis is to investigate the feasibility of integrating concentrated solar power (CSP) technology with conventional combined cycle technology for electricity generation in Saudi Arabia. The generated electricity can be used locally to meet the annually increasing demand. Specifically, it can be utilized to meet the demand between 10 am and 3 pm and prevent blackout hours in some industrial sectors. The proposed CSP design gives flexibility in system operation, since it works as a conventional combined cycle during the night and switches to a hybrid solar combined cycle during the day. The first objective of the thesis is to develop a thermo-economic mathematical model that can simulate the performance of a hybrid solar-fossil fuel combined cycle. The second objective is to develop a computer simulation code that can solve the thermo-economic mathematical model using available software such as EES. The developed simulation code is used to analyze the thermo-economic performance of different configurations for integrating CSP with the conventional fossil fuel combined cycle, in order to find the optimal integration configuration. This optimal configuration has been investigated further to achieve the optimal design of the solar field that gives the optimal solar share. Thermo-economic performance metrics available in the literature have been used in the present work to assess the performance of the investigated configurations. The economic and environmental impacts of integrating CSP with the conventional fossil fuel combined cycle are estimated and discussed. Finally, the optimal integration configuration is found to be solarization of the steam side of the conventional combined cycle with a solar multiple of 0.38, which requires 29 hectares; the LEC of the HYCS is 63.17 $/MWh under Dhahran weather conditions.

  9. SU-E-T-106: Development of a Collision Prediction Algorithm for Determining Problematic Geometry for SBRT Treatments Using a Stereotactic Body Frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagar, M; Friesen, S; Mannarino, E

    2014-06-01

    Purpose: Collision between the gantry and the couch or patient during radiotherapy is not a common concern for conventional RT (static fields or arc). With the increase in the application of stereotactic planning techniques to the body, collisions have become a greater concern. Non-coplanar beam geometry is desirable in stereotactic treatments in order to achieve sharp gradients and high conformality. Non-coplanar geometry is less intuitive in the body and often requires an iterative process of planning and dry runs to guarantee deliverability. Methods: Purpose-written software was developed in order to predict the likelihood of collision between the head of the gantry and the couch, patient, or stereotactic body frame. Using the DICOM plan and structure set exported by the treatment planning system, this software is able to predict the possibility of a collision. Given the plan's isocenter, treatment geometry, and exterior contours, the software is able to determine if a particular beam/arc is clinically deliverable or if collision is imminent. Results: The software was tested on real-world treatment plans with untreatable beam geometry. Both static non-coplanar and VMAT plans were tested. Of these, the collision prediction software could identify all as having potentially problematic geometry. Re-plans of the same cases were also tested and validated as deliverable. Conclusion: This software is capable of giving a good initial indication of deliverability for treatment plans that utilize complex geometry (SBRT) or have lateral isocenters. This software is not intended to replace the standard pre-treatment QA dry run. The effectiveness is limited to those portions of the patient and immobilization devices that have been included in the simulation CT and contoured in the planning system. It will, however, aid the planner in reducing the iterations required to create the complex treatment geometries necessary to achieve ideal conformality and organ sparing.
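
    The core geometric check described in the Methods can be sketched as a simple clearance test: for each gantry angle, place the face of the gantry head on its circular sweep about the isocenter and verify that no external contour point (patient, couch, or body frame) comes within a safety margin. The head radius, margin, coordinate convention, and fixed couch angle below are illustrative assumptions; the published tool works from the full DICOM plan and structure set.

        import numpy as np

        def gantry_clearance(contour_pts_mm, gantry_angles_deg,
                             head_radius_mm=400.0, margin_mm=50.0):
            """Flag gantry angles at which the face of the gantry head comes closer to
            any external contour point (patient, couch, body frame) than a safety margin.
            Assumes isocenter at the origin, couch at 0 deg, and a simple circular sweep
            of the head face in the plane transverse to the couch."""
            pts = np.asarray(contour_pts_mm, dtype=float)        # (N, 3) room coordinates
            risky = []
            for ang in gantry_angles_deg:
                a = np.deg2rad(ang)
                # head face center for this gantry angle (x: lateral, z: vertical)
                head = np.array([head_radius_mm * np.sin(a), 0.0, head_radius_mm * np.cos(a)])
                clearance = np.min(np.linalg.norm(pts - head, axis=1)) - margin_mm
                if clearance < 0:
                    risky.append((ang, clearance))
            return risky   # empty list -> geometry looks deliverable at these angles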

  10. Influence of ocean tides on the diurnal and semidiurnal earth rotation variations from VLBI observations

    NASA Astrophysics Data System (ADS)

    Gubanov, V. S.; Kurdubov, S. L.

    2015-05-01

    The International astrogeodetic standard IERS Conventions (2010) contains a model of the diurnal and semidiurnal variations in Earth rotation parameters (ERPs), the pole coordinates and the Universal Time, arising from lunisolar tides in the world ocean. This model was constructed in the mid-1990s through a global analysis of Topex/Poseidon altimetry. The goal of this study is to try to estimate the parameters of this model by processing all the available VLBI observations on a global network of stations over the last 35 years performed within the framework of IVS (International VLBI Service) geodetic programs. The complexity of the problem lies in the fact that the sought-for corrections to the parameters of this model lie within 1 mm and, thus, are at the limit of their detectability by all currently available methods of ground-based positional measurements. This requires applying universal software packages with a high accuracy of reduction calculations and a well-developed system of controlling the simultaneous adjustment of observational data to analyze long series of VLBI observations. This study has been performed with the QUASAR software package developed at the Institute of Applied Astronomy of the Russian Academy of Sciences. Although the results obtained, on the whole, confirm a high accuracy of the basic model in the IERS Conventions (2010), statistically significant corrections that allow this model to be refined have been detected for some harmonics of the ERP variations.

  11. Novel wavelength diversity technique for high-speed atmospheric turbulence compensation

    NASA Astrophysics Data System (ADS)

    Arrasmith, William W.; Sullivan, Sean F.

    2010-04-01

    The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric turbulence compensated imagery. Parallel processing capabilities are finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.

  12. Computational modeling of drug-resistant bacteria. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDougall, Preston

    2015-03-12

    Initial proposal summary: The evolution of antibiotic-resistant mutants among bacteria (superbugs) is a persistent and growing threat to public health. In many ways, we are engaged in a war with these microorganisms, where the corresponding arms race involves chemical weapons and biological targets. Just as advances in microelectronics, imaging technology and feature recognition software have turned conventional munitions into smart bombs, the long-term objectives of this proposal are to develop highly effective antibiotics using next-generation biomolecular modeling capabilities in tandem with novel subatomic feature detection software. Using model compounds and targets, our design methodology will be validated with correspondingly ultra-high resolution structure-determination methods at premier DOE facilities (single-crystal X-ray diffraction at Argonne National Laboratory, and neutron diffraction at Oak Ridge National Laboratory). The objectives and accomplishments are summarized.

  13. Using the iBook in medical education and healthcare settings--the iBook as a reusable learning object; a report of the author's experience using iBooks Author software.

    PubMed

    Payne, Karl Fb; Goodson, Alexander Mc; Tahim, Arpan; Wharrad, Heather J; Fan, Kathleen

    2012-12-01

    The recently launched iBooks 2 from Apple has created a new genre of 'interactive multimedia eBook'. This article aims to describe the benefit of the iBook in a medical education and healthcare setting. We discuss the attributes of an iBook as compared with the requirements of the conventional web-based Reusable Learning Object. The structure and user interface within an iBook is highlighted, and the iBook-creating software iBooks Author is discussed in detail. A report of personal experience developing and distributing an iBook for junior trainees in oral and maxillofacial surgery is provided, with discussion of the limitations of this approach and the need for further evidence-based studies.

  14. Relation between Alice Software and Programming Learning: A Systematic Review of the Literature and Meta-Analysis

    ERIC Educational Resources Information Center

    Costa, Joana M.; Miranda, Guilhermina L.

    2017-01-01

    This paper presents the results of a systematic review of the literature, including a meta-analysis, about the effectiveness of the use of Alice software in programming learning when compared to the use of a conventional programming language. Our research included studies published between the years 2000 and 2014 in the main databases. We gathered…

  15. Autonomous Performance Monitoring System: Monitoring and Self-Tuning (MAST)

    NASA Technical Reports Server (NTRS)

    Peterson, Chariya; Ziyad, Nigel A.

    2000-01-01

    Maintaining the long-term performance of software onboard a spacecraft can be a major factor in the cost of operations. In particular, the task of controlling and maintaining a future mission of distributed spacecraft will undoubtedly pose a great challenge, since the complexity of multiple spacecraft flying in formation grows rapidly as the number of spacecraft in the formation increases. Eventually, new approaches will be required in developing viable control systems that can handle the complexity of the data and that are flexible, reliable, and efficient. In this paper we propose a methodology that aims to maintain the accuracy of flight software while reducing the computational complexity of software tuning tasks. The proposed Monitoring and Self-Tuning (MAST) method consists of two parts: a flight software monitoring algorithm and a tuning algorithm. The dependency on the software being monitored is mostly contained in the monitoring process, while the tuning process is a generic algorithm independent of detailed knowledge of the software. This architecture will enable MAST to be applicable to different onboard software controlling various dynamics of the spacecraft, such as attitude self-calibration and formation control. An advantage of MAST over conventional techniques such as filtering or batch least squares is that the tuning algorithm uses a machine-learning approach to handle uncertainty in the problem domain, reducing the overall computational complexity. The underlying concept of this technique is a reinforcement learning scheme based on cumulative probability generated by the historical performance of the system. The success of MAST will depend heavily on the reinforcement scheme used in the tuning algorithm, which guarantees that tuning solutions exist.

  16. System Advisor Model, SAM 2014.1.14: General Description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blair, Nate; Dobos, Aron P.; Freeman, Janine

    2014-02-01

    This document describes the capabilities of the U.S. Department of Energy and National Renewable Energy Laboratory's System Advisor Model (SAM), Version 2013.9.20, released on September 9, 2013. SAM is a computer model that calculates performance and financial metrics of renewable energy systems. Project developers, policy makers, equipment manufacturers, and researchers use graphs and tables of SAM results in the process of evaluating financial, technology, and incentive options for renewable energy projects. SAM simulates the performance of photovoltaic, concentrating solar power, solar water heating, wind, geothermal, biomass, and conventional power systems. The financial model can represent financial structures for projects that either buy and sell electricity at retail rates (residential and commercial) or sell electricity at a price determined in a power purchase agreement (utility). SAM's advanced simulation options facilitate parametric and sensitivity analyses, and statistical analysis capabilities are available for Monte Carlo simulation and weather variability (P50/P90) studies. SAM can also read input variables from Microsoft Excel worksheets. For software developers, the SAM software development kit (SDK) makes it possible to use SAM simulation modules in their applications written in C/C++, C#, Java, Python, and MATLAB. NREL provides both SAM and the SDK as free downloads at http://sam.nrel.gov. Technical support and more information about the software are available on the website.

  17. Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform

    PubMed Central

    Clark, Randy T.; MacCurdy, Robert B.; Jung, Janelle K.; Shaff, Jon E.; McCouch, Susan R.; Aneshansley, Daniel J.; Kochian, Leon V.

    2011-01-01

    A novel imaging and software platform was developed for the high-throughput phenotyping of three-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and imaged daily for 10 d. Rotational image sequences consisting of 40 two-dimensional images were captured using an optically corrected digital imaging system. Three-dimensional root reconstructions were generated and analyzed using custom-designed software, RootReader3D. Using the automated and interactive capabilities of RootReader3D, five rice root types were classified and 27 phenotypic root traits were measured to characterize these two genotypes. Where possible, measurements from the three-dimensional platform were validated and were highly correlated with conventional two-dimensional measurements. When comparing gellan gum-grown plants with those grown under hydroponic and sand culture, significant differences were detected in morphological root traits (P < 0.05). This highly flexible platform provides the capacity to measure root traits with a high degree of spatial and temporal resolution and will facilitate novel investigations into the development of entire root systems or selected components of root systems. In combination with the extensive genetic resources that are now available, this platform will be a powerful resource to further explore the molecular and genetic determinants of root system architecture. PMID:21454799

  18. Digital Transplantation Pathology: Combining Whole Slide Imaging, Multiplex Staining, and Automated Image Analysis

    PubMed Central

    Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J

    2013-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology - global “–omic” analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785

  19. A Non-linear Lifting Line Model for Design and Analysis of Trochoidal Propulsors

    NASA Astrophysics Data System (ADS)

    Roesler, Bernard; Epps, Brenden

    2014-11-01

    Flapping wing propulsors may increase the propulsive efficiency of large shipping vessels. A comparison of the design of a notional propulsor for a large shipping vessel with (a) a conventional ducted propeller versus (b) a flapping wing propulsor is presented. Calculations for flapping wing propulsors are performed using an open-source MATLAB software suite developed by the authors, CyROD, implementing an unsteady lifting-line model with free vortex wake roll-up to study the non-linear effects of foil-wake, and foil-foil interactions. Improvements to the traditional lifting line theory are made using further discretization of the wake vortex ring spacing near the trailing edge. Considerations of packaging options for a flapping wing propulsor on a large shipping vessel are presented, and compared with those for a conventional ducted propeller.

  20. Open release of the DCA++ project

    NASA Astrophysics Data System (ADS)

    Haehner, Urs; Solca, Raffaele; Staar, Peter; Alvarez, Gonzalo; Maier, Thomas; Summers, Michael; Schulthess, Thomas

    We present the first open release of the DCA++ project, a highly scalable and efficient research code to solve quantum many-body problems with cutting edge quantum cluster algorithms. The implemented dynamical cluster approximation (DCA) and its DCA+ extension with a continuous self-energy capture nonlocal correlations in strongly correlated electron systems thereby allowing insight into high-Tc superconductivity. With the increasing heterogeneity of modern machines, DCA++ provides portable performance on conventional and emerging new architectures, such as hybrid CPU-GPU and Xeon Phi, sustaining multiple petaflops on ORNL's Titan and CSCS' Piz Daint. Moreover, we will describe how best practices in software engineering can be applied to make software development sustainable and scalable in a research group. Software testing and documentation not only prevent productivity collapse, but more importantly, they are necessary for correctness, credibility and reproducibility of scientific results. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) awarded by the INCITE program, and of the Swiss National Supercomputing Center. OLCF is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  1. Evaluation of software tools for automated identification of neuroanatomical structures in quantitative β-amyloid PET imaging to diagnose Alzheimer's disease.

    PubMed

    Tuszynski, Tobias; Rullmann, Michael; Luthardt, Julia; Butzke, Daniel; Tiepolt, Solveig; Gertz, Hermann-Josef; Hesse, Swen; Seese, Anita; Lobsien, Donald; Sabri, Osama; Barthel, Henryk

    2016-06-01

    For regional quantification of nuclear brain imaging data, defining volumes of interest (VOIs) by hand is still the gold standard. As this procedure is time-consuming and operator-dependent, a variety of software tools for automated identification of neuroanatomical structures were developed. As the quality and performance of these tools in analyzing amyloid PET data have been poorly investigated so far, we compared in this project four algorithms for automated VOI definition (HERMES Brass, two PMOD approaches, and FreeSurfer) against the conventional method. We systematically analyzed florbetaben brain PET and MRI data of ten patients with probable Alzheimer's dementia (AD) and ten age-matched healthy controls (HCs) collected in a previous clinical study. VOIs were manually defined on the data as well as through the four automated workflows. Standardized uptake value ratios (SUVRs) with the cerebellar cortex as a reference region were obtained for each VOI. SUVR comparisons between ADs and HCs were carried out using Mann-Whitney U tests, and effect sizes (Cohen's d) were calculated. SUVRs of automatically generated VOIs were correlated with SUVRs of conventionally derived VOIs (Pearson's tests). The composite neocortex SUVRs obtained by manually defined VOIs were significantly higher for ADs vs. HCs (p=0.010, d=1.53). This was also the case for the four tested automated approaches, which achieved effect sizes of d=1.38 to d=1.62. SUVRs of automatically generated VOIs correlated significantly with those of the hand-drawn VOIs in a number of brain regions, with regional differences in the degree of these correlations. The best overall correlation was observed in the lateral temporal VOI for all tested software tools (r=0.82 to r=0.95, p<0.001). Automated VOI definition by the software tools tested has great potential to substitute for the current standard procedure of manually defining VOIs in β-amyloid PET data analysis.
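
    The group comparison reported here (SUVRs with a cerebellar reference, Mann-Whitney U tests, and Cohen's d) can be reproduced with standard scientific Python tools. The sketch below assumes SciPy's mannwhitneyu and a pooled-standard-deviation form of Cohen's d; the SUVR values are purely illustrative, not data from the study.

        import numpy as np
        from scipy.stats import mannwhitneyu

        def suvr(region_uptake, cerebellar_cortex_uptake):
            """Standardized uptake value ratio with the cerebellar cortex as reference."""
            return region_uptake / cerebellar_cortex_uptake

        def cohens_d(a, b):
            """Effect size using the pooled standard deviation."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                             / (len(a) + len(b) - 2))
            return (a.mean() - b.mean()) / pooled

        # Illustrative composite-neocortex SUVRs for AD patients and healthy controls
        ad = [1.60, 1.70, 1.50, 1.80, 1.65, 1.55, 1.72, 1.68, 1.59, 1.75]
        hc = [1.10, 1.20, 1.15, 1.05, 1.18, 1.12, 1.08, 1.22, 1.16, 1.10]
        u, p = mannwhitneyu(ad, hc, alternative="two-sided")
        print(f"Mann-Whitney p = {p:.4f}, Cohen's d = {cohens_d(ad, hc):.2f}")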

  2. Evaluation of bond strength of resin cements using different general-purpose statistical software packages for two-parameter Weibull statistics.

    PubMed

    Roos, Malgorzata; Stawarczyk, Bogna

    2012-07-01

    This study evaluated and compared Weibull parameters of resin bond strength values using six different general-purpose statistical software packages for two-parameter Weibull distribution. Two hundred human teeth were randomly divided into 4 groups (n=50), prepared and bonded on dentin according to the manufacturers' instructions using the following resin cements: (i) Variolink (VAN, conventional resin cement), (ii) Panavia21 (PAN, conventional resin cement), (iii) RelyX Unicem (RXU, self-adhesive resin cement) and (iv) G-Cem (GCM, self-adhesive resin cement). Subsequently, all specimens were stored in water for 24 h at 37°C. Shear bond strength was measured and the data were analyzed using Anderson-Darling goodness-of-fit (MINITAB 16) and two-parameter Weibull statistics with the following statistical software packages: Excel 2011, SPSS 19, MINITAB 16, R 2.12.1, SAS 9.1.3, and STATA 11.2 (p≤0.05). Additionally, the three-parameter Weibull was fitted using MINITAB 16. Two-parameter Weibull estimates calculated with MINITAB and STATA can be compared using an omnibus test and using 95% CIs. In SAS, only 95% CIs were directly obtained from the output. R provided no estimates of 95% CIs. In both SAS and R, the global comparison of the characteristic bond strength among groups is provided by means of Weibull regression. EXCEL and SPSS provided no default information about 95% CIs and no significance test for the comparison of Weibull parameters among the groups. In summary, the conventional resin cement VAN showed the highest Weibull modulus and characteristic bond strength. There are discrepancies in the Weibull statistics depending on the software package and the estimation method. The information content of the default output provided by the software packages differs to a very high extent. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
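
    For readers without access to the commercial packages compared here, a two-parameter Weibull fit of bond strength data is also straightforward in open-source Python. The sketch below assumes SciPy's weibull_min distribution with the location parameter fixed at zero, so the fitted shape is the Weibull modulus and the fitted scale is the characteristic bond strength; the sample values are simulated, not data from the study.

        import numpy as np
        from scipy.stats import weibull_min

        def weibull_params(bond_strengths_mpa):
            """Maximum-likelihood two-parameter Weibull fit (location fixed at zero):
            returns the Weibull modulus m and the characteristic strength sigma_0."""
            m, _, sigma0 = weibull_min.fit(bond_strengths_mpa, floc=0)
            return m, sigma0

        # Illustrative shear bond strength values (MPa) for one cement group
        rng = np.random.default_rng(1)
        sample = weibull_min.rvs(c=8.0, scale=20.0, size=50, random_state=rng)
        m, sigma0 = weibull_params(sample)
        print(f"Weibull modulus m = {m:.2f}, characteristic strength = {sigma0:.2f} MPa")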

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  4. Agent oriented programming

    NASA Technical Reports Server (NTRS)

    Shoham, Yoav

    1994-01-01

    The goal of our research is a methodology for creating robust software in distributed and dynamic environments. The approach taken is to endow software objects with explicit information about one another, to have them interact through a commitment mechanism, and to equip them with a speech-act-like communication language. System-level applications include software interoperation and compositionality. A government application of specific interest is an infrastructure for coordination among multiple planners. Daily activity applications include personal software assistants, such as programmable email, scheduling, and new group agents. Research topics include definition of mental state of agents, design of agent languages as well as interpreters for those languages, and mechanisms for coordination within agent societies such as artificial social laws and conventions.

  5. General-Purpose Front End for Real-Time Data Processing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, front end signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable and is executed in an embedded virtual machine. Each component can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.

  6. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black-box algorithms. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motive, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  7. Predicting tool life in turning operations using neural networks and image processing

    NASA Astrophysics Data System (ADS)

    Mikołajczyk, T.; Nowicki, K.; Bustillo, A.; Yu Pimenov, D.

    2018-05-01

    A two-step method is presented for the automatic prediction of tool life in turning operations. First, experimental data are collected for three cutting edges under the same constant processing conditions. In these experiments, the tool wear parameter, VB, is measured with conventional methods and the same parameter is estimated using Neural Wear, a customized software package that combines flank wear image recognition and Artificial Neural Networks (ANNs). Second, an ANN model of tool life is trained with the data collected from the first two cutting edges and the resulting model is evaluated on two different subsets for the third cutting edge: the first subset is obtained from the direct measurement of tool wear and the second is obtained from the Neural Wear software, which estimates tool wear using edge images. Although the fully automated solution (Neural Wear software for tool wear recognition plus the ANN model for tool life prediction) presented a slightly higher error than the direct measurements, it was within the same range and can meet all industrial requirements. These results confirm that the combination of image recognition software and ANN modelling could potentially be developed into a useful industrial tool for low-cost estimation of tool life in turning operations.
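
    The second step, training a small ANN regressor on wear data from two cutting edges and evaluating it on the held-out third edge, can be sketched with scikit-learn. The feature layout (cutting time and measured VB as inputs, remaining tool life as the target) and all of the numbers below are hypothetical placeholders, not the study's data or its exact network.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical training data: rows are (cutting time [min], measured VB [mm]),
        # target is remaining tool life [min]; edges 1-2 train, edge 3 is held out.
        X_train = np.array([[2, 0.05], [6, 0.12], [10, 0.20], [3, 0.06], [7, 0.14], [11, 0.22]])
        y_train = np.array([18, 14, 10, 17, 13, 9])
        X_test = np.array([[4, 0.08], [8, 0.16]])       # third cutting edge

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        model.fit(X_train, y_train)
        print(model.predict(X_test))    # estimated remaining life for the held-out edge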

  8. A study of quantification of aortic compliance in mice using radial acquisition phase contrast MRI

    NASA Astrophysics Data System (ADS)

    Zhao, Xuandong

    Spatiotemporal changes in blood flow velocity measured using phase contrast Magnetic Resonance Imaging (MRI) can be used to quantify Pulse Wave Velocity (PWV) and Wall Shear Stress (WSS), well-known indices of vessel compliance. A study was conducted to measure the PWV in the aortic arch in young healthy children using conventional phase contrast MRI and a post-processing algorithm that automatically tracks the peak velocity in phase contrast images. It is shown that the PWV calculated using peak velocity-time data has less variability compared to that calculated using mean velocity and flow. Conventional MR data acquisition techniques lack both the spatial and temporal resolution needed to accurately calculate PWV and WSS in in vivo studies using transgenic animal models of arterial diseases. Radial k-space acquisition can improve both spatial and temporal resolution. A major part of this thesis was devoted to developing technology for Radial Phase Contrast Magnetic Resonance (RPCMR) cine imaging on a 7 Tesla animal scanner. A pulse sequence with asymmetric radial k-space acquisition was designed and implemented. The software developed to reconstruct the RPCMR images includes gridding, density compensation, and centering of k-space, which corrects the image ghosting introduced by hardware response time. Image processing software was developed to automatically segment the vessel lumen and correct for phase offsets due to eddy currents. Finally, in vivo and ex vivo aortic compliance measurements were conducted in a well-established mouse model of atherosclerosis: the Apolipoprotein E knockout (ApoE-KO). Using the RPCMR technique, a significantly higher PWV as well as a higher average WSS was detected in 9-month-old ApoE-KO mice compared with wild-type mice. A follow-up ex vivo test of tissue elasticity confirmed the impaired distensibility of the aortas of ApoE-KO mice.
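
    The PWV quantification described here reduces to a transit-time calculation: the distance along the aorta between two measurement planes divided by the delay of the velocity waveform between them, with the delay taken from the peak-velocity times. The sketch below assumes synthetic Gaussian-shaped velocity curves and a simple argmax peak detector, not the thesis's automated peak-tracking algorithm.

        import numpy as np

        def pulse_wave_velocity(t_ms, v_proximal, v_distal, path_length_mm):
            """PWV = path length between two aortic measurement planes divided by the
            transit time of the velocity waveform, here taken as the delay between the
            times of peak velocity at the proximal and distal planes."""
            t_ms = np.asarray(t_ms, float)
            dt_ms = t_ms[np.argmax(v_distal)] - t_ms[np.argmax(v_proximal)]
            return path_length_mm / dt_ms   # mm/ms is numerically equal to m/s

        # Illustrative waveforms sampled every 2 ms with a 6 ms peak delay over 30 mm
        t = np.arange(0, 200, 2.0)
        prox = np.exp(-((t - 60.0) / 15.0) ** 2)
        dist = np.exp(-((t - 66.0) / 15.0) ** 2)
        print(f"PWV = {pulse_wave_velocity(t, prox, dist, 30.0):.1f} m/s")   # 5.0 m/s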

  9. A software tool of digital tomosynthesis application for patient positioning in radiotherapy

    PubMed Central

    Dai, Jian‐Rong

    2016-01-01

    Digital Tomosynthesis (DTS) is an imaging modality that reconstructs tomographic images from two-dimensional kV projections covering a narrow scan angle. Compared with conventional cone-beam CT (CBCT), it requires less time and a lower radiation dose for data acquisition. It is feasible to apply this technique to patient positioning in radiotherapy. To facilitate its clinical application, a software tool was developed and the reconstruction processes were accelerated by a graphics processing unit (GPU). Two reconstruction and two registration processes are required for the DTS application, in contrast to the conventional CBCT application, which requires one image reconstruction process and one image registration process. The reconstruction stage consists of the production of two types of DTS. One type of DTS is reconstructed from cone-beam (CB) projections covering a narrow scan angle and is named onboard DTS (ODTS), which represents the real patient position in the treatment room. Another type of DTS is reconstructed from digitally reconstructed radiography (DRR) and is named reference DTS (RDTS), which represents the ideal patient position in the treatment room. Prior to the reconstruction of RDTS, the DRRs are reconstructed from the planning CT using the same acquisition settings as the CB projections. The registration stage consists of two matching processes between ODTS and RDTS. The target shifts in the lateral and longitudinal axes are obtained from the matching between ODTS and RDTS in the coronal view, while the target shifts in the longitudinal and vertical axes are obtained from the matching in the sagittal view. In this software, both DRR and DTS reconstruction algorithms were implemented in GPU environments for acceleration purposes. A comprehensive evaluation of this software tool was performed, including geometric accuracy, image quality, registration accuracy, and reconstruction efficiency. The average correlation coefficient between DRR/DTS generated by the GPU-based algorithm and the CPU-based algorithm is 0.99. Based on the measurements of a cube phantom on DTS, the geometric errors are within 0.5 mm in three axes. For both the cube phantom and a pelvic phantom, the registration errors are within 0.5 mm in three axes. Compared with the CPU-based algorithms, the performance of DRR and DTS reconstruction is improved by a factor of 15 to 20. A GPU-based software tool was developed for DTS application in patient positioning for radiotherapy. The geometric and registration accuracy met the clinical requirements for patient setup in radiotherapy. The high performance of the DRR and DTS reconstruction algorithms was achieved by the GPU-based computation environment. It is a useful software tool for researchers and clinicians in evaluating DTS application in patient positioning for radiotherapy. PACS number(s): 87.57.nf PMID:27074482
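
    A minimal sketch of one way an in-plane target shift can be recovered from a matched slice pair follows (synthetic images and plain phase correlation, assumed here for illustration rather than the paper's registration method): the coronal-view match yields the lateral/longitudinal shifts.

      # Sketch only: synthetic data, phase-correlation registration assumed for illustration.
      import numpy as np

      rng = np.random.default_rng(1)
      rdts = rng.random((128, 128))                  # reference DTS slice (synthetic)
      odts = np.roll(rdts, (3, -5), axis=(0, 1))     # on-board DTS slice, shifted by (3, -5) px

      cross_power = np.fft.fft2(rdts) * np.conj(np.fft.fft2(odts))
      cross_power /= np.abs(cross_power) + 1e-12
      corr = np.abs(np.fft.ifft2(cross_power))
      peak = np.unravel_index(np.argmax(corr), corr.shape)
      # wrap indices above half the image size to negative shifts
      est = [p - n if p > n // 2 else p for p, n in zip(peak, corr.shape)]
      print("shift to realign on-board slice with reference:", est)   # (-3, 5)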

  10. Experiences in improving the state of the practice in verification and validation of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Culbert, Chris; French, Scott W.; Hamilton, David

    1994-01-01

    Knowledge-based systems (KBS's) are in general use in a wide variety of domains, both commercial and government. As reliance on these types of systems grows, the need to assess their quality and validity reaches critical importance. As with any software, the reliability of a KBS can be directly attributed to the application of disciplined programming and testing practices throughout the development life-cycle. However, there are some essential differences between conventional software and KBSs, both in construction and use. The identification of these differences affects the verification and validation (V&V) process and the development of techniques to handle them. The recognition of these differences is the basis of considerable on-going research in this field. For the past three years IBM (Federal Systems Company - Houston) and the Software Technology Branch (STB) of NASA/Johnson Space Center have been working to improve the 'state of the practice' in V&V of knowledge-based systems. This work was motivated by the need to maintain NASA's ability to produce high-quality software while taking advantage of new KBS technology. To date, the primary accomplishment has been the development and teaching of a four-day workshop on KBS V&V. With the hope of improving the impact of these workshops, we also worked directly with NASA KBS projects to employ concepts taught in the workshop. This paper describes two projects that were part of this effort. In addition to describing each project, this paper describes problems encountered and solutions proposed in each case, with particular emphasis on implications for transferring KBS V&V technology beyond the NASA domain.

  11. Automated water analyser computer supported system (AWACSS) Part I: Project objectives, basic technology, immunoassay development, software design and networking.

    PubMed

    Tschmelak, Jens; Proll, Guenther; Riedt, Johannes; Kaiser, Joachim; Kraemmer, Peter; Bárzaga, Luis; Wilkinson, James S; Hua, Ping; Hole, J Patrick; Nudd, Richard; Jackson, Michael; Abuknesha, Ram; Barceló, Damià; Rodriguez-Mozaz, Sara; de Alda, Maria J López; Sacher, Frank; Stien, Jan; Slobodník, Jaroslav; Oswald, Peter; Kozmenko, Helena; Korenková, Eva; Tóthová, Lívia; Krascsenits, Zoltan; Gauglitz, Guenter

    2005-02-15

    A novel analytical system, AWACSS (automated water analyser computer-supported system), based on immunochemical technology has been developed that can measure several organic pollutants at the low nanogram-per-litre level in a single analysis lasting a few minutes, without any prior sample pre-concentration or pre-treatment steps. Bearing in mind the actual needs of water-sector managers related to the implementation of the Drinking Water Directive (DWD) (98/83/EC, 1998) and the Water Framework Directive (WFD) (2000/60/EC, 2000), drinking, ground, surface, and waste waters were the major media used for the evaluation of the system performance. The instrument was equipped with remote control and surveillance facilities. The system's software allows for internet-based networking between the measurement and control stations, global management, trend analysis, and early-warning applications. The experience of water laboratories was utilised in the design of the instrument's hardware and software in order to make the system rugged and user-friendly. Several market surveys were conducted during the project to assess the applicability of the final system. A web-based AWACSS database was created for automated evaluation and storage of the obtained data in a format compatible with major databases of environmental organic pollutants in Europe. This first part gives the reader an overview of the aims and scope of the AWACSS project as well as details about the basic technology, immunoassays, software, and networking developed and utilised within the research project. The second part reports on the system performance, first real-sample measurements, and an international collaborative trial (inter-laboratory tests) to compare the biosensor with conventional analytical methods.

  12. Removing Background Noise with Phased Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by OptiNav combined with cross-spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
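
    A minimal sketch of the cross-spectral matrix subtraction idea follows (synthetic four-microphone data with assumed levels, not the OptiNav processing chain): the CSM of a background-only recording is subtracted from the CSM of a recording with the drivers on, leaving an estimate of the source contribution.

      # Sketch only: synthetic data, assumed source/background levels.
      import numpy as np

      rng = np.random.default_rng(2)
      n_mic, n_snap = 4, 400
      cn = lambda shape: (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

      steer = np.ones((n_mic, 1))                     # source seen equally by all mics (assumed)
      signal = steer * cn((1, n_snap))                # coherent acoustic-driver signal, unit power
      background = lambda: 3.0 * cn((n_mic, n_snap))  # incoherent background, ~9.5 dB above the signal

      rec_noise_only = background()                   # background-only recording
      rec_with_signal = signal + background()         # recording with the drivers on

      csm = lambda x: x @ x.conj().T / x.shape[1]     # averaged cross-spectral matrix
      csm_clean = csm(rec_with_signal) - csm(rec_noise_only)
      print("recovered source power per mic:", np.real(np.trace(csm_clean)) / n_mic)  # ~1.0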

  13. Scintillation-Hardened GPS Receiver

    NASA Technical Reports Server (NTRS)

    Stephens, Donald R.

    2015-01-01

    CommLargo, Inc., has developed a scintillation-hardened Global Positioning System (GPS) receiver that improves reliability for low-orbit missions and complies with NASA's Space Telecommunications Radio System (STRS) architecture standards. A software-defined radio (SDR) implementation allows a single hardware element to function as either a conventional radio or as a GPS receiver, providing backup and redundancy for platforms such as the International Space Station (ISS) and high-value remote sensing platforms. The innovation's flexible SDR implementation reduces cost, weight, and power requirements. Scintillation hardening improves mission reliability and variability. In Phase I, CommLargo refactored an open-source GPS software package with Kalman filter-based tracking loops to improve performance during scintillation and also demonstrated improved navigation during a geomagnetic storm. In Phase II, the company generated a new field-programmable gate array (FPGA)-based GPS waveform to demonstrate on NASA's Space Communication and Navigation (SCaN) test bed.

  14. Timing characterization and analysis of the Linux-based, closed loop control computer for the Subaru Telescope laser guide star adaptive optics system

    NASA Astrophysics Data System (ADS)

    Dinkins, Matthew; Colley, Stephen

    2008-07-01

    Hardware and software specialized for real-time control reduce the timing jitter of executables when compared to off-the-shelf hardware and software. However, these specialized environments are costly in both money and development time. While conventional systems have a cost advantage, the jitter in these systems is much larger and potentially problematic. This study analyzes the timing characteristics of a standard Dell server running a fully featured Linux operating system to determine if such a system would be capable of meeting the timing requirements for closed-loop operations. Investigations are performed on the effectiveness of tools designed to bring off-the-shelf system performance closer to that of specialized real-time systems. The GNU Compiler Collection (gcc) is compared to the Intel C Compiler (icc), compiler optimizations are investigated, and real-time extensions to Linux are evaluated.
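
    A minimal sketch of the kind of jitter measurement involved follows (a Python stand-in for illustration, not the study's benchmarks; a 1 ms period is assumed): a periodic loop records how late each wake-up is relative to its deadline on a stock Linux system.

      # Sketch only: illustrative jitter measurement.
      import time
      import statistics

      period_ns = 1_000_000                    # 1 ms loop period (assumed)
      lat = []
      next_t = time.perf_counter_ns() + period_ns
      for _ in range(1000):
          while time.perf_counter_ns() < next_t:      # busy-wait until the next deadline
              pass
          lat.append(time.perf_counter_ns() - next_t) # lateness past the deadline (jitter)
          next_t += period_ns

      print("mean %.0f ns, max %d ns, stdev %.0f ns"
            % (statistics.mean(lat), max(lat), statistics.pstdev(lat)))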

  15. Finding Multiple Internal Rates of Return for a Project with Non-Conventional Cash Flows: Utilizing Popular Financial/Graphing Calculators and Spreadsheet Software

    ERIC Educational Resources Information Center

    Chen, Jeng-Hong

    2008-01-01

    This study demonstrates that a popular graphing calculator among students, the TI-83 Plus, has a powerful function to draw the NPV profile and find accurate multiple IRRs for a project with non-conventional cash flows. However, finance textbooks and related supplementary materials do not provide students with instructions for this part. The detailed…
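
    A minimal sketch of the same computation in code follows (an assumed textbook-style cash-flow example, not taken from the article): all IRRs of a non-conventional cash-flow stream are the positive real roots of the NPV polynomial.

      # Sketch only: example cash flows assumed; two sign changes allow up to two IRRs.
      import numpy as np

      cash_flows = [-1600, 10000, -10000]        # CF_0, CF_1, CF_2 (assumed example)
      # NPV(r) = sum CF_t / (1+r)^t = 0  ->  polynomial in x = 1/(1+r)
      coeffs = cash_flows[::-1]                  # highest power of x first for np.roots
      roots = np.roots(coeffs)
      irrs = [1 / x.real - 1 for x in roots if abs(x.imag) < 1e-9 and x.real > 0]
      print(sorted(round(r, 4) for r in irrs))   # two real IRRs: 25% and 400%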

  16. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now complete. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
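
    A minimal sketch of the stochastic models named here follows (assumed parameter values, not those used with GEODYNII): a discrete first-order Gauss-Markov process and a random walk, the forms used for the solar-pressure scale and y-bias parameters and for the tropospheric correction, respectively.

      # Sketch only: illustrative parameter values.
      import numpy as np

      rng = np.random.default_rng(3)
      dt, tau, sigma = 300.0, 86400.0, 0.05     # step (s), correlation time (s), steady-state sigma (assumed)
      phi = np.exp(-dt / tau)                   # state transition factor
      q = sigma**2 * (1 - phi**2)               # driving-noise variance that keeps the process stationary

      n = 20000
      gm = np.zeros(n)                          # first-order Gauss-Markov state
      rw = np.zeros(n)                          # random-walk state (e.g., troposphere)
      for k in range(1, n):
          gm[k] = phi * gm[k - 1] + rng.normal(0, np.sqrt(q))
          rw[k] = rw[k - 1] + rng.normal(0, 1e-3)
      print("Gauss-Markov sample std %.3f (steady-state sigma %.3f)" % (gm.std(), sigma))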

  17. Tutorial: Crystal orientations and EBSD — Or which way is up?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britton, T.B., E-mail: b.britton@imperial.ac.uk; Jiang, J.; Guo, Y.

    2016-07-15

    Electron backscatter diffraction (EBSD) is an automated technique that can measure the orientation of crystals in a sample very rapidly. There are many sophisticated software packages that present measured data. Unfortunately, due to crystal symmetry and differences in the set-up of the microscope and EBSD software, there may be accuracy issues when linking the crystal orientation to a particular microstructural feature. In this paper we outline a series of conventions used to describe crystal orientations and coordinate systems. These conventions have been used to successfully demonstrate that a consistent frame of reference is used in the sample, unit cell, pole figure, and diffraction pattern frames of reference. We establish a coordinate system rooted in measurement of the diffraction pattern and subsequently link this to all other coordinate systems. A fundamental outcome of this analysis is to note that the beamshift coordinate system needs to be precisely defined for consistent 3D microstructure analysis. This is supported through a series of case studies examining particular features of the microscope settings and/or unambiguous crystallographic features. These case studies can be generated easily in most laboratories and represent an opportunity to demonstrate confidence in use of recorded orientation data. Finally, we include a simple software tool, written in both MATLAB® and Python, which the reader can use to compare consistency with their own microscope set-up and which may act as a springboard for further offline analysis. - Highlights: • Presentation of conventions used to describe crystal orientations • Three case studies that outline how conventions are consistent • Demonstrates a pathway for calibration and validation of EBSD-based orientation measurements • EBSD computer code supplied for validation by the reader.
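
    A minimal sketch of the kind of orientation bookkeeping such conventions govern follows (not the authors' supplied MATLAB/Python tool; Bunge Z-X-Z Euler angles and example values are assumed): the orientation matrix built from Euler angles expresses a sample-frame direction in crystal coordinates.

      # Sketch only: passive Bunge-convention orientation matrix, example angles assumed.
      import numpy as np

      def rot_z(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

      def rot_x(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

      def orientation_matrix(phi1, Phi, phi2):
          # g maps sample-frame vectors into the crystal frame (Bunge Z-X-Z convention)
          return rot_z(phi2) @ rot_x(Phi) @ rot_z(phi1)

      g = orientation_matrix(*np.radians([30.0, 45.0, 60.0]))   # example Euler angles (assumed)
      nd_sample = np.array([0.0, 0.0, 1.0])                     # sample normal direction
      print("sample ND expressed in crystal axes:", g @ nd_sample)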

  18. Automated audiometry using apple iOS-based application technology.

    PubMed

    Foulad, Allen; Bui, Peggy; Djalilian, Hamid

    2013-11-01

    The aim of this study is to determine the feasibility of an Apple iOS-based automated hearing testing application and to compare its accuracy with conventional audiometry. This was a prospective diagnostic study conducted at an academic medical center. An iOS-based software application was developed to perform automated pure-tone hearing testing on the iPhone, iPod touch, and iPad. To assess for device variations and compatibility, preliminary work was performed to compare the standardized sound output (dB) of various Apple device and headset combinations. Forty-two subjects underwent automated iOS-based hearing testing in a sound booth, automated iOS-based hearing testing in a quiet room, and conventional manual audiometry. The maximum difference in sound intensity between various Apple device and headset combinations was 4 dB. On average, 96% (95% confidence interval [CI], 91%-100%) of the threshold values obtained using the automated test in a sound booth were within 10 dB of the corresponding threshold values obtained using conventional audiometry. When the automated test was performed in a quiet room, 94% (95% CI, 87%-100%) of the threshold values were within 10 dB of the threshold values obtained using conventional audiometry. Under standardized testing conditions, 90% of the subjects preferred iOS-based audiometry as opposed to conventional audiometry. Apple iOS-based devices provide a platform for automated air conduction audiometry without requiring extra equipment and yield hearing test results that approach those of conventional audiometry.

  19. Advanced Protection & Service Restoration for FREEDM Systems

    NASA Astrophysics Data System (ADS)

    Singh, Urvir

    A smart electric power distribution system (FREEDM system) that incorporates DERs (Distributed Energy Resources), SSTs (Solid State Transformers, which can limit the fault current to two times the rated current) and RSC (Reliable & Secure Communication) capabilities has been studied in this work in order to develop appropriate protection and service restoration techniques for it. First, a solution is proposed that enables conventional protective devices to provide effective protection for FREEDM systems. Results show that although this scheme can provide the required protection, it can be quite slow. Using the FREEDM system's communication capabilities, a communication-assisted overcurrent (O/C) protection scheme is proposed, and results show that by using communication (blocking signals) very fast operating times are achieved, thereby mitigating the problem of the conventional O/C scheme. Using the FREEDM system's DGI (Distributed Grid Intelligence) capability, an automated FLISR (Fault Location, Isolation & Service Restoration) scheme is proposed that is based on the concept of 'software agents' and uses less data than conventional centralized approaches. Test results illustrate that this scheme is able to provide a globally optimal system reconfiguration for service restoration.

  20. Preliminary results from the use of the novel Interactive binocular treatment (I-BiT) system, in the treatment of strabismic and anisometropic amblyopia.

    PubMed

    Waddingham, P E; Butler, T K H; Cobb, S V; Moody, A D R; Comaish, I F; Haworth, S M; Gregson, R M; Ash, I M; Brown, S M; Eastgate, R M; Griffiths, G D

    2006-03-01

    We have developed a novel application of adapted virtual reality (VR) technology, for the binocular treatment of amblyopia. We describe the use of the system in six children. Subjects consisted of three conventional treatment 'failures' and three conventional treatment 'refusers', with a mean age of 6.25 years (5.42-7.75 years). Treatment consisted of watching video clips and playing interactive games with specifically designed software to allow streamed binocular image presentation. Initial vision in the amblyopic eye ranged from 6/12 to 6/120 and post-treatment 6/7.5 to 6/24-1. Total treatment time was a mean of 4.4 h. Five out of six children have shown an improvement in their vision (average increase of 10 letters), including those who had previously failed to comply with conventional occlusion. Improvements in vision were demonstrable within a short period of time, in some children after 1 h of treatment. This system is an exciting and promising application of VR technology as a new treatment for amblyopia.

  1. From WSN towards WoT: Open API Scheme Based on oneM2M Platforms.

    PubMed

    Kim, Jaeho; Choi, Sung-Chan; Ahn, Il-Yeup; Sung, Nak-Myoung; Yun, Jaeseok

    2016-10-06

    Conventional computing systems have been able to be integrated into daily objects and connected to each other due to advances in computing and network technologies, such as wireless sensor networks (WSNs), forming a global network infrastructure, called the Internet of Things (IoT). To support the interconnection and interoperability between heterogeneous IoT systems, the availability of standardized, open application programming interfaces (APIs) is one of the key features of common software platforms for IoT devices, gateways, and servers. In this paper, we present a standardized way of extending previously-existing WSNs towards IoT systems, building the world of the Web of Things (WoT). Based on the oneM2M software platforms developed in the previous project, we introduce a well-designed open API scheme and device-specific thing adaptation software (TAS) enabling WSN elements, such as a wireless sensor node, to be accessed in a standardized way on a global scale. Three pilot services are implemented (i.e., a WiFi-enabled smart flowerpot, voice-based control for ZigBee-connected home appliances, and WiFi-connected AR.Drone control) to demonstrate the practical usability of the open API scheme and TAS modules. Full details on the method of integrating WSN elements into three example systems are described at the programming code level, which is expected to help future researchers in integrating their WSN systems in IoT platforms, such as oneM2M. We hope that the flexibly-deployable, easily-reusable common open API scheme and TAS-based integration method working with the oneM2M platforms will help the conventional WSNs in diverse industries evolve into the emerging WoT solutions.

  2. From WSN towards WoT: Open API Scheme Based on oneM2M Platforms

    PubMed Central

    Kim, Jaeho; Choi, Sung-Chan; Ahn, Il-Yeup; Sung, Nak-Myoung; Yun, Jaeseok

    2016-01-01

    Conventional computing systems have been able to be integrated into daily objects and connected to each other due to advances in computing and network technologies, such as wireless sensor networks (WSNs), forming a global network infrastructure, called the Internet of Things (IoT). To support the interconnection and interoperability between heterogeneous IoT systems, the availability of standardized, open application programming interfaces (APIs) is one of the key features of common software platforms for IoT devices, gateways, and servers. In this paper, we present a standardized way of extending previously-existing WSNs towards IoT systems, building the world of the Web of Things (WoT). Based on the oneM2M software platforms developed in the previous project, we introduce a well-designed open API scheme and device-specific thing adaptation software (TAS) enabling WSN elements, such as a wireless sensor node, to be accessed in a standardized way on a global scale. Three pilot services are implemented (i.e., a WiFi-enabled smart flowerpot, voice-based control for ZigBee-connected home appliances, and WiFi-connected AR.Drone control) to demonstrate the practical usability of the open API scheme and TAS modules. Full details on the method of integrating WSN elements into three example systems are described at the programming code level, which is expected to help future researchers in integrating their WSN systems in IoT platforms, such as oneM2M. We hope that the flexibly-deployable, easily-reusable common open API scheme and TAS-based integration method working with the oneM2M platforms will help the conventional WSNs in diverse industries evolve into the emerging WoT solutions. PMID:27782058

  3. The Computer-based Health Evaluation Software (CHES): a software for electronic patient-reported outcome monitoring.

    PubMed

    Holzner, Bernhard; Giesinger, Johannes M; Pinggera, Jakob; Zugal, Stefan; Schöpf, Felix; Oberguggenberger, Anne S; Gamper, Eva M; Zabernigg, August; Weber, Barbara; Rumpold, Gerhard

    2012-11-09

    Patient-reported Outcomes (PROs), capturing e.g. quality of life, fatigue, depression, medication side-effects or disease symptoms, have become important outcome parameters in medical research and daily clinical practice. Electronic PRO data capture (ePRO) with software packages to administer questionnaires, store data, and present results has facilitated PRO assessment in hospital settings. Compared to conventional paper-pencil versions of PRO instruments, ePRO is more economical with regard to staff resources and time, and allows immediate presentation of results to the medical staff. The objective of our project was to develop software (CHES - Computer-based Health Evaluation System) for ePRO in hospital settings and at home with a special focus on the presentation of individual patient's results. Following the Extreme Programming development approach, the architecture was not fixed up-front but was developed in close, continuous collaboration with software end users (medical staff, researchers and patients) to meet their specific demands. Developed features include sophisticated, longitudinal charts linking patients' PRO data to clinical characteristics and to PRO scores from reference populations, a web-interface for questionnaire administration, and a tool for the convenient creation and editing of questionnaires. By 2012, CHES had been implemented at various institutions in Austria, Germany, Switzerland, and the UK, and about 5000 patients had participated in ePRO (with around 15000 assessments in total). Data entry is done by the patients themselves via tablet PCs, with a study nurse or an intern approaching patients and supervising questionnaire completion. During the last decade several software packages for ePRO have emerged for different purposes. Whereas commercial products are available primarily for ePRO in clinical trials, academic projects have focused on data collection and presentation in daily clinical practice and on extending cancer registries with PRO data. CHES includes several features facilitating the use of PRO data for individualized medical decision making. With its web-interface, it also allows ePRO when patients are at home. Thus, it provides complete monitoring of patients' physical and psychosocial symptom burden.

  4. The Computer-based Health Evaluation Software (CHES): a software for electronic patient-reported outcome monitoring

    PubMed Central

    2012-01-01

    Background Patient-reported Outcomes (PROs), capturing e.g. quality of life, fatigue, depression, medication side-effects or disease symptoms, have become important outcome parameters in medical research and daily clinical practice. Electronic PRO data capture (ePRO) with software packages to administer questionnaires, store data, and present results has facilitated PRO assessment in hospital settings. Compared to conventional paper-pencil versions of PRO instruments, ePRO is more economical with regard to staff resources and time, and allows immediate presentation of results to the medical staff. The objective of our project was to develop software (CHES – Computer-based Health Evaluation System) for ePRO in hospital settings and at home with a special focus on the presentation of individual patient’s results. Methods Following the Extreme Programming development approach, the architecture was not fixed up-front but was developed in close, continuous collaboration with software end users (medical staff, researchers and patients) to meet their specific demands. Developed features include sophisticated, longitudinal charts linking patients’ PRO data to clinical characteristics and to PRO scores from reference populations, a web-interface for questionnaire administration, and a tool for the convenient creation and editing of questionnaires. Results By 2012, CHES had been implemented at various institutions in Austria, Germany, Switzerland, and the UK, and about 5000 patients had participated in ePRO (with around 15000 assessments in total). Data entry is done by the patients themselves via tablet PCs, with a study nurse or an intern approaching patients and supervising questionnaire completion. Discussion During the last decade several software packages for ePRO have emerged for different purposes. Whereas commercial products are available primarily for ePRO in clinical trials, academic projects have focused on data collection and presentation in daily clinical practice and on extending cancer registries with PRO data. CHES includes several features facilitating the use of PRO data for individualized medical decision making. With its web-interface, it also allows ePRO when patients are at home. Thus, it provides complete monitoring of patients’ physical and psychosocial symptom burden. PMID:23140270

  5. Dental Students' Perceptions of Digital and Conventional Impression Techniques: A Randomized Controlled Trial.

    PubMed

    Zitzmann, Nicola U; Kovaltschuk, Irina; Lenherr, Patrik; Dedem, Philipp; Joda, Tim

    2017-10-01

    The aim of this randomized controlled trial was to analyze inexperienced dental students' perceptions of the difficulty and applicability of digital and conventional implant impressions, their preferences, and their performance. Fifty undergraduate dental students at a dental school in Switzerland were randomly divided into two groups (2×25). Group A first took digital impressions in a standardized phantom model and then conventional impressions, while the procedures were reversed for Group B. Participants were asked to complete a VAS questionnaire (0-100) on the level of difficulty and applicability (user/patient-friendliness) of both techniques. They were asked which technique they preferred and perceived to be more efficient. A quotient of "effective scan time per software-recorded time" (TRIOS) was calculated as an objective quality indicator for intraoral optical scanning (IOS). The majority of students perceived IOS as easier than the conventional technique. Most (72%) preferred the digital approach using IOS to take the implant impression to the conventional method (12%) or had no preference (12%). Although total work was similar for males and females, the TRIOS quotient indicated that male students tended to use their time more efficiently. In this study, dental students with no clinical experience were very capable of adopting digital tools, indicating that digital impression techniques can be included early in the dental curriculum to help them catch up with ongoing developments in computer-assisted technologies used in oral rehabilitation.

  6. The SSM/PMAD automated test bed project

    NASA Technical Reports Server (NTRS)

    Lollar, Louis F.

    1991-01-01

    The Space Station Module/Power Management and Distribution (SSM/PMAD) autonomous subsystem project was initiated in 1984. The project's goal has been to design and develop an autonomous, user-supportive PMAD test bed simulating the SSF Hab/Lab module(s). An eighteen kilowatt SSM/PMAD test bed model with a high degree of automated operation has been developed. This advanced automation test bed contains three expert/knowledge based systems that interact with one another and with other more conventional software residing in up to eight distributed 386-based microcomputers to perform the necessary tasks of real-time and near real-time load scheduling, dynamic load prioritizing, and fault detection, isolation, and recovery (FDIR).

  7. Technical aspects of telepathology with emphasis on future development.

    PubMed

    Schwarzmann, P; Binder, B; Klose, R

    2000-01-01

    Pathology is presently undergoing changes due to new developments in diagnostic opportunities and cost-saving efforts in health care. Out of the wide field of telepathology, the paper selects three prototype applications: telepathology in tele-education, expert advice on preselected details of a slide, and finally telepathology for remote diagnosis. The most challenging field for remote diagnosis is the application in the frozen-section scenario. The paper starts with a thought experiment mapping conventional procedures to their counterparts in telepathology. Technical opportunities and economic restrictions of telepathology equipment are discussed with respect to the components: electronic camera, display devices, haptic sensors and displays, available telecommunication channels, and telepathology software. As an example, and to illustrate the state of the art for an advanced telemicroscopy system able to perform remote frozen-section diagnosis, the HISTKOM equipment is presented in more detail. The section concerning future developments considers acceptance by prospective users, legal aspects, costs and affordability of equipment, the market for equipment components, and the adequacy of telecommunication services. It also considers how the properties of existing systems, and the application experience gained with them, will influence the next generation of equipment and application software. Conclusions and references close the paper.

  8. A Software Architecture for Intelligent Synthesis Environments

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    NASA's Intelligent Synthesis Environment (ISE) program is a grand attempt to develop a system to transform the way complex artifacts are engineered. This paper discusses a "middleware" architecture for enabling the development of ISE. Desirable elements of such an Intelligent Synthesis Architecture (ISA) include remote invocation; plug-and-play applications; scripting of applications; management of design artifacts, tools, and artifact and tool attributes; common system services; system management; and systematic enforcement of policies. This paper argues that the ISA should extend conventional distributed object technology (DOT) such as CORBA and Product Data Managers with flexible repositories of product and tool annotations and "plug-and-play" mechanisms for inserting "ility" or orthogonal concerns into the system. I describe the Object Infrastructure Framework, an Aspect-Oriented Programming (AOP) environment for developing distributed systems that provides utility insertion and enables consistent annotation maintenance. This technology can be used to enforce policies such as maintaining the annotations of artifacts, particularly the provenance and access control rules of artifacts; performing automatic datatype transformations between representations; supplying alternative servers of the same service; reporting on the status of jobs and the system; conveying privileges throughout an application; supporting long-lived transactions; maintaining version consistency; and providing software redundancy and mobility.
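
    A minimal sketch of the "ility" insertion idea follows (a plain Python decorator used as a stand-in, not the Object Infrastructure Framework API; the service name is hypothetical): an orthogonal concern such as provenance recording is wrapped around a service call without modifying the service itself.

      # Sketch only: decorator-based interception as an analogue of AOP-style injectors.
      import functools, time

      def with_provenance(call_log):
          def decorator(fn):
              @functools.wraps(fn)
              def wrapper(*args, **kwargs):
                  result = fn(*args, **kwargs)        # invoke the underlying service
                  call_log.append({"op": fn.__name__, "args": args, "time": time.time()})
                  return result
              return wrapper
          return decorator

      provenance = []

      @with_provenance(provenance)
      def transform_artifact(artifact, fmt):          # hypothetical design-tool service
          return f"{artifact} converted to {fmt}"

      print(transform_artifact("wing_mesh_v2", "STEP"))
      print(provenance)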

  9. Development of Low-cost plotter for educational purposes using Arduino

    NASA Astrophysics Data System (ADS)

    Karthik, Siriparapu; Thirumal Reddy, Palwai; Marimuthu, K. Prakash

    2017-08-01

    With the development of the CAD/CAM/CAE approach, product realization time has been reduced drastically. Most of the activities such as design, drafting, and visualization are carried out using high-end computers and commercial software. This has reduced the overall lead time to market. It is important in the current scenario to equip students with knowledge of advanced technological developments in order to use them effectively. However, the cost associated with these systems is very high and not affordable for students. The present work is an attempt to build a low-cost plotter integrating some of the available software and components salvaged from scrapped electronic devices. Here the authors introduce a three-axis G-code plotter that can execute the given G-code in the 2D (X-Y) plane. Lifting the pen and adjusting it to the base component is handled by the Z-axis. Conventional plotting devices available to date are costly and require basic knowledge to operate. Our aim is to help students understand the working of a plotter and the usage of G-code at a much more affordable cost. An Arduino Uno controls the stepper motors, which can accurately plot the given dimensions.
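
    A minimal sketch of the G-code interpretation such a plotter performs follows (written in Python for illustration rather than Arduino C, with an assumed convention that Z <= 0 means pen down): each line is parsed into an X-Y target and a pen state.

      # Sketch only: simplified G-code handling, conventions assumed.
      def parse_gcode_line(line, state):
          parts = line.upper().split()
          if not parts:
              return state
          cmd, words = parts[0], {p[0]: float(p[1:]) for p in parts[1:]}
          if cmd in ("G0", "G1"):                          # linear move
              state["x"] = words.get("X", state["x"])
              state["y"] = words.get("Y", state["y"])
              state["z"] = words.get("Z", state["z"])
              state["pen_down"] = state["z"] <= 0          # Z axis lifts/drops the pen (assumed)
          return state

      state = {"x": 0.0, "y": 0.0, "z": 1.0, "pen_down": False}
      for line in ["G0 X10 Y10 Z1", "G1 Z0", "G1 X20 Y15"]:
          state = parse_gcode_line(line, state)
          print(state)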

  10. Modified virtual reality technology for treatment of amblyopia.

    PubMed

    Eastgate, R M; Griffiths, G D; Waddingham, P E; Moody, A D; Butler, T K H; Cobb, S V; Comaish, I F; Haworth, S M; Gregson, R M; Ash, I M; Brown, S M

    2006-03-01

    The conventional patching/occlusion treatment for amblyopia sometimes gives disappointing results for a number of reasons: it is unpopular and prolonged, frequently results in poor compliance or noncompliance, and also disrupts fusion. The aim of this research was to develop a novel virtual-reality (VR)-based display system that facilitates the treatment of amblyopia with both eyes stimulated simultaneously. We have adopted a multidisciplinary approach, combining VR expertise with a team of ophthalmologists and orthoptists to develop the Interactive Binocular Treatment (I-BiT) system. This system incorporates adapted VR technology and specially written software providing interactive 2D and 3D games and videos to the patient via a stereo (binocular) display, and a control screen for the clinician. We developed a prototype research system designed for treatment of amblyopia in children. The result is a novel way to treat amblyopia, which allows binocular treatment. It is interactive, and as it is partially software based, can be adapted to suit the age, ability, and needs of the patient. This means that the treatment can be made captivating and enjoyable. Further research is ongoing to determine the efficacy of this new modality in the treatment of amblyopia.

  11. An object-oriented data reduction system in Fortran

    NASA Technical Reports Server (NTRS)

    Bailey, J.

    1992-01-01

    A data reduction system for the AAO two-degree field project is being developed using an object-oriented approach. Rather than using an object-oriented language (such as C++), the system is written in Fortran and makes extensive use of existing subroutine libraries provided by the UK Starlink project. Objects are created using the extensible N-dimensional Data Format (NDF), which itself is based on the Hierarchical Data System (HDS). The software consists of a class library, with each class corresponding to a Fortran subroutine with a standard calling sequence. The methods of the classes provide operations on NDF objects at a similar level of functionality to the applications of conventional data reduction systems. However, because they are provided as callable subroutines, they can be used as building blocks for more specialist applications. The class library is not dependent on a particular software environment, though it can be used effectively in ADAM applications. It can also be used from standalone Fortran programs. It is intended to develop a graphical user interface for use with the class library to form the 2dF data reduction system.

  12. A Concept for the One Degree Imager (ODI) Data Reduction Pipeline and Archiving System

    NASA Astrophysics Data System (ADS)

    Knezek, Patricia; Stobie, B.; Michael, S.; Valdes, F.; Marru, S.; Henschel, R.; Pierce, M.

    2010-05-01

    The One Degree Imager (ODI), currently being built by the WIYN Observatory, will provide tremendous possibilities for conducting diverse scientific programs. ODI will be a complex instrument, using non-conventional Orthogonal Transfer Array (OTA) detectors. Due to its large field of view, small pixel size, use of OTA technology, and expected frequent use, ODI will produce vast amounts of astronomical data. If ODI is to achieve its full potential, a data reduction pipeline must be developed. Long-term archiving must also be incorporated into the pipeline system to ensure the continued value of ODI data. This paper presents a concept for an ODI data reduction pipeline and archiving system. To limit costs and development time, our plan leverages existing software and hardware, including existing pipeline software, Science Gateways, Computational Grid & Cloud Technology, Indiana University's Data Capacitor and Massive Data Storage System, and TeraGrid compute resources. Existing pipeline software will be augmented to add functionality required to meet challenges specific to ODI, enhance end-user control, and enable the execution of the pipeline on grid resources including national grid resources such as the TeraGrid and Open Science Grid. The planned system offers consistent standard reductions and end-user flexibility when working with images beyond the initial instrument signature removal. It also gives end-users access to computational and storage resources far beyond what are typically available at most institutions. Overall, the proposed system provides a wide array of software tools and the necessary hardware resources to use them effectively.

  13. MDWiZ: a platform for the automated translation of molecular dynamics simulations.

    PubMed

    Rusu, Victor H; Horta, Vitor A C; Horta, Bruno A C; Lins, Roberto D; Baron, Riccardo

    2014-03-01

    A variety of popular molecular dynamics (MD) simulation packages were independently developed over the last decades to reach diverse scientific goals. However, such non-coordinated development of software, force fields, and analysis tools for molecular simulations gave rise to an array of software formats and arbitrary conventions for routine preparation and analysis of simulation input and output data. Different formats and/or parameter definitions are used at each stage of the modeling process even though they largely contain redundant information across alternative software tools. Such a Babel of languages, which cannot be easily and unambiguously translated one into another, poses one of the major technical obstacles to the preparation, translation, and comparison of molecular simulation data that users face on a daily basis. Here, we present the MDWiZ platform, a freely accessible online portal designed to aid the fast and reliable preparation and conversion of file formats that allows researchers to reproduce or generate data from MD simulations using different setups, including force fields and models with different underlying potential forms. The general structure of MDWiZ is presented, the features of version 1.0 are detailed, and an extensive validation based on GROMACS to LAMMPS conversion is presented. We believe that MDWiZ will be largely useful to the molecular dynamics community. Such fast format and force field exchange for a given system allows tailoring the chosen system to a given computer platform and/or taking advantage of specific capabilities offered by different software engines. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Development and preliminary evaluation of an ultrasonic motor actuated needle guide for 3T MRI-guided transperineal prostate interventions

    NASA Astrophysics Data System (ADS)

    Song, Sang-Eun; Tokuda, Junichi; Tuncali, Kemal; Tempany, Clare; Hata, Nobuhiko

    2012-02-01

    Image-guided prostate interventions have been accelerated by Magnetic Resonance Imaging (MRI) and robotic technologies in the past few years. However, transrectal ultrasound (TRUS)-guided procedures still account for the vast majority in clinical practice due to the engineering and clinical complexity of MRI-guided robotic interventions. Consequently, the great advantages and increasing availability of MRI have not been utilized to their maximum capacity in the clinic. To bring the advantages of MRI to patients, we developed an MRI-compatible motorized needle guide device, the "Smart Template", that resembles a conventional prostate template and performs MRI-guided prostate interventions with minimal changes in the clinical procedure. The requirements and specifications of the Smart Template were identified from our latest MRI-guided intervention system, which has been clinically used in manual mode for prostate biopsy. The Smart Template consists of vertical and horizontal crossbars that are driven by two ultrasonic motors via timing-belt and miter-gear transmissions. Navigation software that controls the crossbar position to provide needle insertion positions was also developed. The software can be operated independently or interactively with the open-source navigation software 3D Slicer, which has been developed for prostate intervention. As a preliminary evaluation, MRI distortion and SNR tests were conducted. Significant MRI distortion was found close to the threaded brass alloy components of the template. However, the affected volume was limited to regions outside the clinical region of interest. SNR values over routine MRI scan sequences for prostate biopsy indicated insignificant image degradation in the presence of the robotic system and during actuation of the ultrasonic motors.

  15. Examining Authenticity: An Initial Exploration of the Suitability of Handwritten Electronic Signatures.

    PubMed

    Heckeroth, J; Boywitt, C D

    2017-06-01

    Considering the increasing relevance of handwritten electronically captured signatures, we evaluated the ability of forensic handwriting examiners (FHEs) to distinguish between authentic and simulated electronic signatures. Sixty-six professional FHEs examined the authenticity of electronic signatures captured with software by signotec on a smartphone Galaxy Note 4 by Samsung and signatures made with a ballpoint pen on paper (conventional signatures). In addition, we experimentally varied the name ("J. König" vs. "A. Zaiser") and the status (authentic vs. simulated) of the signatures in question. FHEs' conclusions about the authenticity did not show a statistically significant general difference between electronic and conventional signatures. Furthermore, no significant discrepancies between electronic and conventional signatures were found with regard to other important aspects of the authenticity examination such as questioned signatures' graphic information content, the suitability of the provided sample signatures, the necessity of further examinations and the levels of difficulty of the cases under examination. Thus, this study did not reveal any indications that electronic signatures captured with software by signotec on a Galaxy Note 4 are less well suited than conventional signatures for the examination of authenticity, precluding potential technical problems concerning the integrity of electronic signatures. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Formalizing Space Shuttle Software Requirements

    NASA Technical Reports Server (NTRS)

    Crow, Judith; DiVito, Ben L.

    1996-01-01

    This paper describes two case studies in which requirements for new flight-software subsystems on NASA's Space Shuttle were analyzed, one using standard formal specification techniques, the other using state exploration. These applications serve to illustrate three main theses: (1) formal methods can complement conventional requirements analysis processes effectively, (2) formal methods confer benefits regardless of how extensively they are adopted and applied, and (3) formal methods are most effective when they are judiciously tailored to the application.

  17. Shade matching assisted by digital photography and computer software.

    PubMed

    Schropp, Lars

    2009-04-01

    To evaluate the efficacy of digital photographs and graphic computer software for color matching compared to conventional visual matching. The shade of a tab from a shade guide (Vita 3D-Master Guide) placed in a phantom head was matched to a second guide of the same type by nine observers. This was done for twelve selected shade tabs (tests). The shade-matching procedure was performed visually in a simulated clinic environment and with digital photographs, and the time spent for both procedures was recorded. An alternative arrangement of the shade tabs was used in the digital photographs. In addition, a graphic software program was used for color analysis. Hue, chroma, and lightness values of the test tab and all tabs of the second guide were derived from the digital photographs. According to the CIE L*C*h* color system, the color differences between the test tab and tabs of the second guide were calculated. The shade guide tab that deviated least from the test tab was determined to be the match. Shade matching performance by means of graphic software was compared with the two visual methods and tested by Chi-square tests (alpha= 0.05). Eight of twelve test tabs (67%) were matched correctly by the computer software method. This was significantly better (p < 0.02) than the performance of the visual shade matching methods conducted in the simulated clinic (32% correct match) and with photographs (28% correct match). No correlation between time consumption for the visual shade matching methods and frequency of correct match was observed. Shade matching assisted by digital photographs and computer software was significantly more reliable than by conventional visual methods.
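
    A minimal sketch of the colour-difference step follows (example L*C*h* values assumed, not the study's measurements): each candidate tab's difference from the test tab is computed and the tab with the smallest difference is taken as the match.

      # Sketch only: assumed L*, C*, h values for a test tab and three candidate tabs.
      import math

      def delta_e(lch1, lch2):
          # convert L*C*h to L*a*b*, then take the Euclidean colour difference
          def to_lab(L, C, h):
              return L, C * math.cos(math.radians(h)), C * math.sin(math.radians(h))
          L1, a1, b1 = to_lab(*lch1)
          L2, a2, b2 = to_lab(*lch2)
          return math.sqrt((L1 - L2)**2 + (a1 - a2)**2 + (b1 - b2)**2)

      test_tab = (72.0, 18.0, 85.0)                               # L*, C*, h (assumed)
      guide = {"2M2": (73.5, 17.0, 86.0),
               "3M2": (69.0, 21.0, 83.0),
               "2M3": (71.0, 23.0, 84.0)}                          # assumed candidate values
      best = min(guide, key=lambda tab: delta_e(test_tab, guide[tab]))
      print("closest match:", best)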

  18. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    PubMed

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.

  19. Optimised layout and roadway support planning with integrated intelligent software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouniali, S.; Josien, J.P.; Piguet, J.P.

    1996-12-01

    Experience with knowledge-based systems for layout planning and roadway support dimensioning has been available in European coal mining since 1985. The systems SOUT (support choice and dimensioning, 1989), SOUT 2, PLANANK (planning of bolt support), EXOS (layout planning diagnosis, 1994), and SOUT 3 (1995) have been developed in close cooperation by CdF, INERIS, EMN (France) and RAG, DMT, TH Aachen (Germany); ISLSP (Integrated Software for Layout and Support Planning) development is in progress (completion scheduled for July 1996). This new software technology, in combination with conventional programming systems, numerical models and existing databases, turned out to be suited for setting up an intelligent decision aid for layout and roadway support planning. The system enhances reliability of planning and optimises the safety-to-cost ratio for (1) deformation forecast for roadways in seam and surrounding rocks, with consideration of the general position of the roadway in the rock mass (zones of increased pressure, position of operating and mined panels); (2) support dimensioning; (3) yielding arches, rigid arches, porch sets, rigid rings, yielding rings and bolting/shotcreting for drifts; (4) yielding arches, rigid arches and porch sets for roadways in seam; and (5) bolt support for gateroads (assessment of exclusion criteria and calculation of the bolting pattern) and bolting of face-end zones (feasibility and safety assessment; stability guarantee).

  20. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language.

    PubMed

    de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D

    2013-05-24

    Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.
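
    A minimal sketch of reading such semantically tagged output follows (a hand-written CML-style fragment assumed to follow the CML schema namespace, not actual NWChem output): the atoms of a molecule are extracted from the XML.

      # Sketch only: illustrative CML fragment, not generated by NWChem/FoX.
      import xml.etree.ElementTree as ET

      cml = """<molecule xmlns="http://www.xml-cml.org/schema" id="water">
        <atomArray>
          <atom id="a1" elementType="O" x3="0.000" y3="0.000" z3="0.117"/>
          <atom id="a2" elementType="H" x3="0.000" y3="0.757" z3="-0.469"/>
          <atom id="a3" elementType="H" x3="0.000" y3="-0.757" z3="-0.469"/>
        </atomArray>
      </molecule>"""

      ns = {"cml": "http://www.xml-cml.org/schema"}
      root = ET.fromstring(cml)
      for atom in root.findall(".//cml:atom", ns):
          print(atom.get("elementType"),
                [float(atom.get(k)) for k in ("x3", "y3", "z3")])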

  1. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language

    PubMed Central

    2013-01-01

    Background Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. Results The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. Conclusions The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple “Google-style” searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature. PMID:23705910

  2. CFD Approach To Investigate The Flow Characteristics In Bi-Directional Ventilated Disc Brake

    NASA Astrophysics Data System (ADS)

    Munisamy, Kannan M.; Yusoff, Mohd. Zamri; Shuaib, Norshah Hafeez; Thangaraju, Savithry K.

    2010-06-01

    This paper presents experimental and Computational Fluid Dynamics (CFD) investigations of the flow in ventilated brake discs. The development of an experimental rig with basic measuring devices is detailed, and following a validation study, possible improvements in brake cooling are analyzed further using CFD. The mass flow rate is determined with a basic flow-measurement technique, after which the conventional bi-directional passenger-car disc is simulated using the commercial CFD software FLUENT™. The CFD simulation is used to investigate the flow characteristics in the passages between the blades of the bi-directional ventilated disc brake.

  3. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  4. Biomolecular computers with multiple restriction enzymes.

    PubMed

    Sakowski, Sebastian; Krasinski, Tadeusz; Waldmajer, Jacek; Sarnik, Joanna; Blasiak, Janusz; Poplawski, Tomasz

    2017-01-01

    The development of conventional, silicon-based computers has several limitations, including some related to the Heisenberg uncertainty principle and the von Neumann "bottleneck". Biomolecular computers based on DNA and proteins are largely free of these disadvantages and, along with quantum computers, are reasonable alternatives to their conventional counterparts in some applications. The idea of a DNA computer proposed by Ehud Shapiro's group at the Weizmann Institute of Science was developed using one restriction enzyme as hardware and DNA fragments (the transition molecules) as software and input/output signals. This computer represented a two-state two-symbol finite automaton that was subsequently extended by using two restriction enzymes. In this paper, we propose the idea of a multistate biomolecular computer with multiple commercially available restriction enzymes as hardware. Additionally, an algorithmic method for the construction of transition molecules in the DNA computer based on the use of multiple restriction enzymes is presented. We use this method to construct multistate, biomolecular, nondeterministic finite automata with four commercially available restriction enzymes as hardware. We also describe an experimental application of this theoretical model to a biomolecular finite automaton made of four endonucleases.
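    As a purely illustrative aside (the states, symbols and transitions below are hypothetical, and nothing here reflects the wet-lab encoding), the abstract machine realized by such a DNA computer is a nondeterministic finite automaton, which can be sketched in a few lines of Python:

        # Sketch of a nondeterministic finite automaton over a two-symbol alphabet.
        def nfa_accepts(transitions, start, accepting, word):
            """transitions maps (state, symbol) -> set of possible next states."""
            current = {start}
            for symbol in word:
                current = {nxt for state in current
                           for nxt in transitions.get((state, symbol), set())}
                if not current:          # every nondeterministic branch died
                    return False
            return bool(current & accepting)

        transitions = {
            ("S0", "a"): {"S0", "S1"},
            ("S0", "b"): {"S0"},
            ("S1", "b"): {"S2"},
        }
        print(nfa_accepts(transitions, "S0", {"S2"}, "aab"))   # True
        print(nfa_accepts(transitions, "S0", {"S2"}, "bbb"))   # False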

  5. HapHop-Physio: a computer game to support cognitive therapies in children.

    PubMed

    Rico-Olarte, Carolina; López, Diego M; Narváez, Santiago; Farinango, Charic D; Pharow, Peter S

    2017-01-01

    Care and support of children with physical or mental disabilities raise serious concerns for parents, families, healthcare institutions, schools, and their communities. Recent studies and technological innovations have demonstrated the feasibility of providing therapy and rehabilitation services to children supported by computer games. The aim of this paper is to present HapHop-Physio, an innovative computer game that combines exercise with fun and learning, developed to support cognitive therapies in children. Conventional software engineering methods, such as the Scrum methodology together with functionality and usability testing, were part of the comprehensive methodology adapted to develop HapHop-Physio. The game supports visual and auditory attention therapies, as well as visual and auditory memory activities. The game was developed by a multidisciplinary team, built on the Hopscotch® platform provided by the Fraunhofer Institute for Digital Media Technology IDMT in Germany, and designed in collaboration with a rehabilitation clinic in Colombia. HapHop-Physio was tested and evaluated to probe its functionality and user satisfaction. The results show that a multidisciplinary team, using state-of-the-art videogame technologies and software methodologies, developed an easy-to-use and fun game. Children who tested the game reported that they would like to play it again while undergoing rehabilitation therapies.

  6. A Generic Modeling Process to Support Functional Fault Model Development

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.

  7. Using Floquet periodicity to easily calculate dispersion curves and wave structures of homogeneous waveguides

    NASA Astrophysics Data System (ADS)

    Hakoda, Christopher; Rose, Joseph; Shokouhi, Parisa; Lissenden, Clifford

    2018-04-01

    Dispersion curves are essential to any guided-wave-related project. The Semi-Analytical Finite Element (SAFE) method has become the conventional way to compute dispersion curves for homogeneous waveguides. However, only recently has a general SAFE formulation for commercial and open-source software become available, meaning that until now SAFE analyses have been variable and more time consuming than desirable. Likewise, the Floquet boundary conditions enable analysis of waveguides with periodicity and have been an integral part of the development of metamaterials. In fact, we have found the use of Floquet boundary conditions to be an extremely powerful tool for homogeneous waveguides, too. The nuances of using periodic boundary conditions for homogeneous waveguides that do not exhibit periodicity are discussed. Comparisons between this method and SAFE are made for selected homogeneous waveguide applications. The COMSOL Multiphysics software is used for the results shown, but any standard finite element software that can implement Floquet periodicity (user-defined or built-in) should suffice. Finally, we identify a number of complex waveguides for which dispersion curves can be found with relative ease by using the periodicity inherent to the Floquet boundary conditions.
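    For readers unfamiliar with the technique, the Floquet (Bloch) condition imposed on the two faces of a fictitious unit cell of length Λ along the propagation axis can be written (a standard textbook form, not quoted from the cited paper) as

        \mathbf{u}(x + \Lambda,\, y,\, z) \;=\; \mathbf{u}(x,\, y,\, z)\, e^{-\mathrm{i} k \Lambda},

    and sweeping the wavenumber k while solving the resulting finite-element eigenvalue problem for \omega(k) traces out the dispersion curves, with the phase velocity following as c_p = \omega / k.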

  8. Motions of Celestial Bodies; Computer simulations

    NASA Astrophysics Data System (ADS)

    Butikov, Eugene

    2014-10-01

    This book is written for a wide range of graduate and undergraduate students studying various courses in physics and astronomy. It is accompanied by the award winning educational software package 'Planets and Satellites' developed by the author. This text, together with the interactive software, is intended to help students learn and understand the fundamental concepts and the laws of physics as they apply to the fascinating world of the motions of natural and artificial celestial bodies. The primary aim of the book is the understanding of the foundations of classical and modern physics, while their application to celestial mechanics is used to illustrate these concepts. The simulation programs create vivid and lasting impressions of the investigated phenomena, and provide students and their instructors with a powerful tool which enables them to explore basic concepts that are difficult to study and teach in an abstract conventional manner. Students can work with the text and software at a pace they can enjoy, varying parameters of the simulated systems. Each section of the textbook is supplied with questions, exercises, and problems. Using some of the suggested simulation programs, students have an opportunity to perform interesting mini-research projects in physics and astronomy.

  9. Informed-Proteomics: open-source software package for top-down proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jungkap; Piehowski, Paul D.; Wilkins, Christopher

    Top-down proteomics involves the analysis of intact proteins. This approach is very attractive as it allows for analyzing proteins in their endogenous form without proteolysis, preserving valuable information about post-translational modifications, isoforms, proteolytic processing or their combinations, collectively called proteoforms. Moreover, the quality of the top-down LC-MS/MS datasets is rapidly increasing due to advances in the liquid chromatography and mass spectrometry instrumentation and sample processing protocols. However, the top-down mass spectra are substantially more complex compared to the more conventional bottom-up data. To take full advantage of the increasing quality of the top-down LC-MS/MS datasets there is an urgent need to develop algorithms and software tools for confident proteoform identification and quantification. In this study we present a new open source software suite for top-down proteomics analysis consisting of an LC-MS feature finding algorithm, a database search algorithm, and an interactive results viewer. The presented tool, along with several other popular tools, was evaluated using human-in-mouse xenograft luminal and basal breast tumor samples that are known to have significant differences in protein abundance based on bottom-up analysis.

  10. Interaction design challenges and solutions for ALMA operations monitoring and control

    NASA Astrophysics Data System (ADS)

    Pietriga, Emmanuel; Cubaud, Pierre; Schwarz, Joseph; Primet, Romain; Schilling, Marcus; Barkats, Denis; Barrios, Emilio; Vila Vilaro, Baltasar

    2012-09-01

    The ALMA radio-telescope, currently under construction in northern Chile, is a very advanced instrument that presents numerous challenges. From a software perspective, one critical issue is the design of graphical user interfaces for operations monitoring and control that scale to the complexity of the system and to the massive amounts of data users are faced with. Early experience operating the telescope with only a few antennas has shown that conventional user interface technologies are not adequate in this context. They consume too much screen real-estate, require many unnecessary interactions to access relevant information, and fail to provide operators and astronomers with a clear mental map of the instrument. They increase extraneous cognitive load, impeding tasks that call for quick diagnosis and action. To address this challenge, the ALMA software division adopted a user-centered design approach. For the last two years, astronomers, operators, software engineers and human-computer interaction researchers have been involved in participatory design workshops, with the aim of designing better user interfaces based on state-of-the-art visualization techniques. This paper describes the process that led to the development of those interface components and to a proposal for the science and operations console setup: brainstorming sessions, rapid prototyping, joint implementation work involving software engineers and human-computer interaction researchers, feedback collection from a broader range of users, further iterations and testing.

  11. Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package

    NASA Astrophysics Data System (ADS)

    Cheng, L.; AghaKouchak, A.; Gilleland, E.

    2013-12-01

    Numerous studies show that climatic extremes have increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimation of return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for analysis of climate extremes under both stationary and non-stationary assumptions. The Nonstationary Extreme Value Analysis (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC), which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach, and has the advantages of simplicity, speed of calculation and convergence over conventional MCMC. NEVA also offers the confidence interval and uncertainty bounds of estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization and visualization, explicitly designed to facilitate the analysis of extremes in geosciences. The generalized input and output files of this software package make it attractive for users from across different fields. Both stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
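    As a hedged illustration of the underlying extreme-value calculation (this is a stationary, frequentist sketch, not NEVA's nonstationary Bayesian DE-MC machinery, and the data are synthetic):

        # Fit a GEV distribution to annual maxima and report return levels.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        annual_maxima = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0,
                                       size=60, random_state=rng)

        shape, loc, scale = genextreme.fit(annual_maxima)     # point estimates
        for T in (10, 50, 100):                               # return periods, years
            # The T-year return level is exceeded with probability 1/T per year.
            z_T = genextreme.isf(1.0 / T, shape, loc=loc, scale=scale)
            print(f"{T:4d}-year return level: {z_T:.1f}")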

  12. Computer software configuration description, 241-AY and 241 AZ tank farm MICON automation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkelman, W.D.

    This document describes the configuration process, choices and conventions used during the Micon DCS configuration activities, and issues involved in making changes to the configuration. Includes the master listings of the Tag definitions, which should be revised to authorize any changes. Revision 3 provides additional information on the software used to provide communications with the W-320 project and incorporates minor changes to ensure the document alarm setpoint priorities correctly match operational expectations.

  13. Explain the CERES file naming convention

    Atmospheric Science Data Center

    2014-12-08

    ... using the dataset name, configuration code, and date information, which make each file name unique. A dataset name consists of ... a 6-digit file and software version management code number (120145) ... a date in the form YYYYMMDDHH ...

  14. Accuracy of computer-assisted navigation: significant augmentation by facial recognition software.

    PubMed

    Glicksman, Jordan T; Reger, Christine; Parasher, Arjun K; Kennedy, David W

    2017-09-01

    Over the past 20 years, image guidance navigation has been used with increasing frequency as an adjunct during sinus and skull base surgery. These devices commonly utilize surface registration, where varying pressure of the registration probe and loss of contact with the face during the skin tracing process can lead to registration inaccuracies, and the number of registration points incorporated is necessarily limited. The aim of this study was to evaluate the use of novel facial recognition software for image guidance registration. Consecutive adults undergoing endoscopic sinus surgery (ESS) were prospectively studied. Patients underwent image guidance registration via both conventional surface registration and facial recognition software. The accuracy of both registration processes were measured at the head of the middle turbinate (MTH), middle turbinate axilla (MTA), anterior wall of sphenoid sinus (SS), and nasal tip (NT). Forty-five patients were included in this investigation. Facial recognition was accurate to within a mean of 0.47 mm at the MTH, 0.33 mm at the MTA, 0.39 mm at the SS, and 0.36 mm at the NT. Facial recognition was more accurate than surface registration at the MTH by an average of 0.43 mm (p = 0.002), at the MTA by an average of 0.44 mm (p < 0.001), and at the SS by an average of 0.40 mm (p < 0.001). The integration of facial recognition software did not adversely affect registration time. In this prospective study, automated facial recognition software significantly improved the accuracy of image guidance registration when compared to conventional surface registration. © 2017 ARS-AAOA, LLC.

  15. Development of a digital impression procedure using photogrammetry for complete denture fabrication.

    PubMed

    Matsuda, Takashi; Goto, Takaharu; Kurahashi, Kosuke; Kashiwabara, Toshiya; Ichikawa, Tetsuo

    We developed an innovative procedure for digitizing maxillary edentulous residual ridges with a photogrammetric system capable of estimating three-dimensional (3D) digital forms from multiple two-dimensional (2D) digital images. The aim of this study was to validate the effectiveness of the photogrammetric system. Impressions of the maxillary residual ridges of five edentulous patients were taken with four kinds of procedures: three conventional impression procedures and the photogrammetric system. Plaster models were fabricated from conventional impressions and digitized with a 3D scanner. Two 3D forms out of four forms were superimposed with 3D inspection software, and differences were evaluated using a least squares best fit algorithm. The in vitro experiment suggested that better imaging conditions were in the horizontal range of ± 15 degrees and at a vertical angle of 45 degrees. The mean difference between the photogrammetric image (Form A) and the image taken from conventional preliminarily impression (Form C) was 0.52 ± 0.22 mm. The mean difference between the image taken of final impression through a special tray (Form B) and Form C was 0.26 ± 0.06 mm. The mean difference between the image taken from conventional final impression (Form D) and Form C was 0.25 ± 0.07 mm. The difference between Forms A and C was significantly larger than the differences between Forms B and C and between Forms D and C. The results of this study suggest that obtaining digital impressions of edentulous residual ridges using a photogrammetric system is feasible and available for clinical use.
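    The "least squares best fit algorithm" used for superimposing 3D forms is typically a rigid-body alignment of corresponding points; the hedged sketch below uses the Kabsch algorithm on synthetic point sets (nothing here is taken from the cited study):

        import numpy as np

        def kabsch_align(P, Q):
            """Align point set P onto Q (both N x 3, in correspondence)."""
            Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)   # remove translation
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T           # optimal rotation
            P_aligned = Pc @ R.T + Q.mean(axis=0)
            return P_aligned, np.linalg.norm(P_aligned - Q, axis=1)

        P = np.random.default_rng(1).normal(size=(100, 3))
        theta = np.deg2rad(20.0)
        Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
        Q = P @ Rz.T + np.array([0.5, -0.2, 0.1])             # rotated, shifted copy
        _, residuals = kabsch_align(P, Q)
        print(f"mean surface difference: {residuals.mean():.2e}")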

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Yasin; Khare, Vaibhav Rai; Mathur, Jyotirmay

    The paper describes a parametric study developed to estimate the energy savings potential of a radiant cooling system installed in a commercial building in India. The study is based on numerical modeling of a radiant cooling system installed in an Information Technology (IT) office building sited in the composite climate of Hyderabad. To evaluate thermal performance and energy consumption, simulations were carried out using the ANSYS FLUENT and EnergyPlus software packages, respectively. The building model was calibrated using the measured data for the installed radiant system. Then this calibrated model was used to simulate the energy consumption of a building using a conventional all-air system to determine the proportional energy savings. For proper handling of the latent load, a dedicated outside air system (DOAS) was used as an alternative to a Fan Coil Unit (FCU). A comparison of energy consumption showed that the radiant system was 17.5% more efficient than a conventional all-air system and that a 30% savings was achieved by using a DOAS system compared with a conventional system. Computational Fluid Dynamics (CFD) simulation was performed to evaluate indoor air quality and thermal comfort. It was found that a radiant system offers more uniform temperatures, as well as a better mean air temperature range, than a conventional system. To further enhance the energy savings in the radiant system, different operational strategies were analyzed based on thermal analysis using EnergyPlus. Lastly, the energy savings achieved in this parametric run were more than 10% compared with a conventional all-air system.

  17. Space Flight Software Development Software for Intelligent System Health Management

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Crumbley, Tim

    2004-01-01

    The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.

  18. Semi-automatic tracking, smoothing and segmentation of hyoid bone motion from videofluoroscopic swallowing study.

    PubMed

    Kim, Won-Seok; Zeng, Pengcheng; Shi, Jian Qing; Lee, Youngjo; Paik, Nam-Jong

    2017-01-01

    Motion analysis of the hyoid bone via videofluoroscopic study has been used in clinical research, but the classical manual tracking method is generally labor intensive and time consuming. Although some automatic tracking methods have been developed, masked points could not be tracked, and smoothing and segmentation, which are necessary for functional motion analysis prior to registration, were not provided by previous software. We developed software to track the hyoid bone motion semi-automatically. It works even in the situation where the hyoid bone is masked by the mandible and has been validated in dysphagia patients with stroke. In addition, we added the function of semi-automatic smoothing and segmentation. A total of 30 patients' data were used to develop the software, and data collected from 17 patients were used for validation, of which the trajectories of 8 patients were partly masked. Pearson correlation coefficients between the manual and automatic tracking are high and statistically significant (0.942 to 0.991, P-value<0.0001). Relative errors between automatic tracking and manual tracking in terms of the x-axis, y-axis and 2D range of hyoid bone excursion range from 3.3% to 9.2%. We also developed an automatic method to segment each hyoid bone trajectory into four phases (elevation phase, anterior movement phase, descending phase and returning phase). The semi-automatic hyoid bone tracking from VFSS data by our software is valid compared to the conventional manual tracking method. In addition, the software's automatic indication for switching from automatic to manual mode in extreme cases, and its ability to calibrate without attaching a radiopaque object, are convenient and useful for users. Semi-automatic smoothing and segmentation provide further information for functional motion analysis which is beneficial to further statistical analysis such as functional classification and prognostication for dysphagia. Therefore, this software could provide researchers in the field of dysphagia with a convenient, useful, and all-in-one platform for analyzing the hyoid bone motion. Further development of our method to track the other swallowing-related structures or objects such as the epiglottis and the bolus and to carry out the 2D curve registration may be needed for a more comprehensive functional data analysis for dysphagia with big data.
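    For clarity, the agreement statistics quoted above (Pearson correlation and relative error between manual and semi-automatic tracking) amount to the following computation; the excursion values in this sketch are invented:

        import numpy as np
        from scipy.stats import pearsonr

        manual    = np.array([14.2, 11.8, 16.5, 13.0, 15.1])   # mm, manual tracking
        automatic = np.array([14.0, 12.1, 16.9, 12.6, 15.4])   # mm, semi-automatic

        r, p_value = pearsonr(manual, automatic)
        relative_error = np.abs(automatic - manual) / manual * 100.0
        print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")
        print(f"mean relative error = {relative_error.mean():.1f}%")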

  19. An OpenEarth Framework (OEF) for Integrating and Visualizing Earth Science Data

    NASA Astrophysics Data System (ADS)

    Moreland, J. L.; Nadeau, D. R.; Baru, C.; Crosby, C. J.

    2009-12-01

    The integration of data is essential to make transformative progress in understanding the complex processes operating at the Earth’s surface and within its interior. While our current ability to collect massive amounts of data, develop structural models, and generate high-resolution dynamics models is well developed, our ability to quantitatively integrate these data and models into holistic interpretations of Earth systems is poorly developed. We lack the basic tools to realize a first-order goal in Earth science of developing integrated 4D models of Earth structure and processes using a complete range of available constraints, at a time when the research agenda of major efforts such as EarthScope demand such a capability. Among the challenges to 3D data integration are data that may be in different coordinate spaces, units, value ranges, file formats, and data structures. While several file format standards exist, they are infrequently or incorrectly used. Metadata is often missing, misleading, or relegated to README text files alongside the data. This leaves much of the work to integrate data bogged down by simple data management tasks. The OpenEarth Framework (OEF) being developed by GEON addresses these data management difficulties. The software incorporates file format parsers, data interpretation heuristics, user interfaces to prompt for missing information, and visualization techniques to merge data into a common visual model. The OEF’s data access libraries parse formal and de facto standard file formats and map their data into a common data model. The software handles file format quirks, storage details, caching, local and remote file access, and web service protocol handling. Heuristics are used to determine coordinate spaces, units, and other key data features. Where multiple data structure, naming, and file organization conventions exist, those heuristics check for each convention’s use to find a high confidence interpretation of the data. When no convention or embedded data yields a suitable answer, the user is prompted to fill in the blanks. The OEF’s interaction libraries assist in the construction of user interfaces for data management. These libraries support data import, data prompting, data introspection, the management of the contents of a common data model, and the creation of derived data to support visualization. Finally, visualization libraries provide interactive visualization using an extended version of NASA WorldWind. The OEF viewer supports visualization of terrains, point clouds, 3D volumes, imagery, cutting planes, isosurfaces, and more. Data may be color coded, shaded, and displayed above or below the terrain, and always registered into a common coordinate space. The OEF architecture is open, and cross-platform software libraries are available separately for use with other software projects, while modules from other projects may be integrated into the OEF to extend its features. The OEF is currently being used to visualize data from EarthScope-related research in the Western US.

  20. A multitasking finite state architecture for computer control of an electric powertrain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burba, J.C.

    1984-01-01

    Finite state techniques provide a common design language between the control engineer and the computer engineer for event driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
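    The following toy sketch illustrates the general idea of finite-state task scheduling (the states, events and tasks are hypothetical and do not reproduce the original 1984 powertrain implementation):

        # A finite state machine selects which tasks run in each time slice;
        # events drive the state transitions.
        TRANSITIONS = {                       # (state, event) -> next state
            ("IDLE", "key_on"):     "PRECHARGE",
            ("PRECHARGE", "ready"): "DRIVE",
            ("DRIVE", "fault"):     "SHUTDOWN",
            ("DRIVE", "key_off"):   "SHUTDOWN",
        }
        TASKS = {                             # tasks scheduled while in each state
            "IDLE":      ["sample_inputs"],
            "PRECHARGE": ["sample_inputs", "monitor_bus_voltage"],
            "DRIVE":     ["sample_inputs", "torque_control", "thermal_monitor"],
            "SHUTDOWN":  ["open_contactors"],
        }

        def run(events):
            state = "IDLE"
            for tick, event in enumerate(events):
                for task in TASKS[state]:     # time-sliced task execution
                    print(f"t={tick}: [{state}] run {task}")
                state = TRANSITIONS.get((state, event), state)

        run(["key_on", "ready", None, "key_off"])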

  1. An integrated computational tool for precipitation simulation

    NASA Astrophysics Data System (ADS)

    Cao, W.; Zhang, F.; Chen, S.-L.; Zhang, C.; Chang, Y. A.

    2011-07-01

    Computer aided materials design is of increasing interest because the conventional approach solely relying on experimentation is no longer viable within the constraint of available resources. Modeling of microstructure and mechanical properties during precipitation plays a critical role in understanding the behavior of materials and thus accelerating the development of materials. Nevertheless, an integrated computational tool coupling reliable thermodynamic calculation, kinetic simulation, and property prediction of multi-component systems for industrial applications is rarely available. In this regard, we are developing a software package, PanPrecipitation, under the framework of integrated computational materials engineering to simulate precipitation kinetics. It is seamlessly integrated with the thermodynamic calculation engine, PanEngine, to obtain accurate thermodynamic properties and atomic mobility data necessary for precipitation simulation.

  2. Software breadboard study

    NASA Technical Reports Server (NTRS)

    Nuckolls, C.; Frank, Mark

    1990-01-01

    The overall goal of this study was to develop new concepts and technology for the Comet Rendezvous Asteroid Flyby (CRAF), Cassini, and other future deep space missions which maximally conform to the Functional Specification for the NASA X-Band Transponder (NXT), FM513778 (preliminary, revised July 26, 1988). The study is composed of two tasks. The first task was to investigate a new digital signal processing technique which involves the processing of 1-bit samples and has the potential for significant size, mass, power, and electrical performance improvements over conventional analog approaches. The entire X-band receiver tracking loop was simulated on a digital computer using a high-level programming language. Simulations on this 'software breadboard' showed the technique to be well-behaved and a good approximation to its analog predecessor from threshold to strong signal levels in terms of tracking-loop performance, command signal-to-noise ratio and ranging signal-to-noise ratio. The successful completion of this task paves the way for building a hardware breadboard, the recommended next step in confirming that this approach is ready for incorporation into flight hardware. The second task in this study was to investigate another technique which provides considerable simplification in the synthesis of the receiver first LO over conventional phase-locked multiplier schemes and, in this approach, provides down-conversion for an S-band emergency receive mode without the need for an additional LO. The objective of this study was to develop methodology and models to predict the conversion loss, input RF bandwidth, and output RF bandwidth of a series GaAs FET sampling mixer and to breadboard and test a circuit design suitable for the X and S-band down-conversion applications.

  3. Communicating data quality through Web Map Services

    NASA Astrophysics Data System (ADS)

    Blower, Jon; Roberts, Charles; Griffiths, Guy; Lewis, Jane; Yang, Kevin

    2013-04-01

    The sharing and visualization of environmental data through spatial data infrastructures is becoming increasingly common. However, information about the quality of data is frequently unavailable or presented in an inconsistent fashion. ("Data quality" is a phrase with many possible meanings but here we define it as "fitness for purpose" - therefore different users have different notions of what constitutes a "high quality" dataset.) The GeoViQua project (www.geoviqua.org) is developing means for eliciting, formatting, discovering and visualizing quality information using ISO and Open Geospatial Consortium (OGC) standards. Here we describe one aspect of the innovations of the GeoViQua project. In this presentation, we shall demonstrate new developments in using Web Map Services to communicate data quality at the level of datasets, variables and individual samples. We shall outline a new draft set of conventions (known as "WMS-Q"), which describe a set of rules for using WMS to convey quality information (OGC draft Engineering Report 12-160). We shall demonstrate these conventions through new prototype software, based upon the widely-used ncWMS software, that applies these rules to enable the visualization of uncertainties in raster data such as satellite products and the results of numerical simulations. Many conceptual and practical issues have arisen from these experiments. How can source data be formatted so that a WMS implementation can detect the semantic links between variables (e.g. the links between a mean field and its variance)? The visualization of uncertainty can be a complex task - how can we provide users with the power and flexibility to choose an optimal strategy? How can we maintain compatibility (as far as possible) with existing WMS clients? We explore these questions with reference to existing standards and approaches, including UncertML, NetCDF-U and Styled Layer Descriptors.
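    For orientation, a plain WMS 1.3.0 GetMap request of the kind such conventions build on can be assembled as below; the server URL, layer and style names are hypothetical, and WMS-Q's additional quality conventions are not shown:

        from urllib.parse import urlencode

        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": "sea_surface_temperature",   # hypothetical parent variable
            "STYLES": "default",
            "CRS": "CRS:84",
            "BBOX": "-180,-90,180,90",
            "WIDTH": "1024",
            "HEIGHT": "512",
            "FORMAT": "image/png",
        }
        print("https://example.org/wms?" + urlencode(params))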

  4. Simple and versatile modifications allowing time gated spectral acquisition, imaging and lifetime profiling on conventional wide-field microscopes

    NASA Astrophysics Data System (ADS)

    Pal, Robert; Beeby, Andrew

    2014-09-01

    An inverted microscope has been adapted to allow time-gated imaging and spectroscopy to be carried out on samples containing responsive lanthanide probes. The adaptation employs readily available components, including a pulsed light source, time-gated camera, spectrometer and photon counting detector, allowing imaging, emission spectroscopy and lifetime measurements. Each component is controlled by a suite of software written in LabVIEW and is powered via conventional USB ports.

  5. Novel semi-automated kidney volume measurements in autosomal dominant polycystic kidney disease.

    PubMed

    Muto, Satoru; Kawano, Haruna; Isotani, Shuji; Ide, Hisamitsu; Horie, Shigeo

    2018-06-01

    We assessed the effectiveness and convenience of a novel high-speed 3D image analysis system for semi-automatic kidney volume (KV) measurement, SYNAPSE VINCENT® (Fuji Medical Systems, Tokyo, Japan), in autosomal dominant polycystic kidney disease (ADPKD) patients. We developed novel semi-automated KV measurement software for patients with ADPKD to be included in the SYNAPSE VINCENT® imaging analysis software. The software extracts renal regions using image recognition and measures KV (VINCENT KV). The algorithm was designed to work with manual designation of a long axis of the kidney, cysts included. After using the software to assess the predictive accuracy of the VINCENT method, we performed an external validation study and compared accurate KV and ellipsoid KV (based on geometric modeling) by linear regression analysis and Bland-Altman analysis. Median eGFR was 46.9 ml/min/1.73 m2. Median accurate KV, Vincent KV and ellipsoid KV were 627.7 ml, 619.4 ml (IQR 431.5-947.0) and 694.0 ml (IQR 488.1-1107.4), respectively. Compared with ellipsoid KV (r = 0.9504), Vincent KV correlated strongly with accurate KV (r = 0.9968), without systematic underestimation or overestimation (ellipsoid KV: 14.2 ± 22.0%; Vincent KV: -0.6 ± 6.0%). There were no significant slice-thickness-specific differences (p = 0.2980). The VINCENT method is an accurate and convenient semi-automatic method to measure KV in patients with ADPKD compared with the conventional ellipsoid method.
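    For reference, the conventional ellipsoid estimate referred to above is commonly computed from three orthogonal kidney diameters (a textbook approximation; the exact measurement protocol of the cited study may differ):

        V_{\text{ellipsoid}} \approx \frac{\pi}{6}\, L \times W \times D,

    where L, W and D are the maximal longitudinal, transverse and anteroposterior diameters of the kidney, cysts included.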

  6. SeeGH--a software tool for visualization of whole genome array comparative genomic hybridization data.

    PubMed

    Chi, Bryan; DeLeeuw, Ronald J; Coe, Bradley P; MacAulay, Calum; Lam, Wan L

    2004-02-09

    Array comparative genomic hybridization (CGH) is a technique which detects copy number differences in DNA segments. Complete sequencing of the human genome and the development of an array representing a tiling set of tens of thousands of DNA segments spanning the entire human genome has made high resolution copy number analysis throughout the genome possible. Since array CGH provides signal ratio for each DNA segment, visualization would require the reassembly of individual data points into chromosome profiles. We have developed a visualization tool for displaying whole genome array CGH data in the context of chromosomal location. SeeGH is an application that translates spot signal ratio data from array CGH experiments to displays of high resolution chromosome profiles. Data is imported from a simple tab delimited text file obtained from standard microarray image analysis software. SeeGH processes the signal ratio data and graphically displays it in a conventional CGH karyotype diagram with the added features of magnification and DNA segment annotation. In this process, SeeGH imports the data into a database, calculates the average ratio and standard deviation for each replicate spot, and links them to chromosome regions for graphical display. Once the data is displayed, users have the option of hiding or flagging DNA segments based on user defined criteria, and retrieve annotation information such as clone name, NCBI sequence accession number, ratio, base pair position on the chromosome, and standard deviation. SeeGH represents a novel software tool used to view and analyze array CGH data. The software gives users the ability to view the data in an overall genomic view as well as magnify specific chromosomal regions facilitating the precise localization of genetic alterations. SeeGH is easily installed and runs on Microsoft Windows 2000 or later environments.
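    The per-clone summary step described above (averaging the signal ratios of replicate spots and computing their standard deviation) reduces to a short calculation; the clone identifiers and ratios in this sketch are invented:

        from collections import defaultdict
        from statistics import mean, stdev

        spots = [                     # (clone_id, signal_ratio) from image analysis
            ("RP11-001A01", 1.02), ("RP11-001A01", 0.98), ("RP11-001A01", 1.05),
            ("RP11-002B03", 1.61), ("RP11-002B03", 1.55), ("RP11-002B03", 1.58),
        ]

        by_clone = defaultdict(list)
        for clone, ratio in spots:
            by_clone[clone].append(ratio)

        for clone, ratios in by_clone.items():
            print(f"{clone}: mean ratio {mean(ratios):.2f}, SD {stdev(ratios):.2f}")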

  7. Enzymatic corn wet milling: engineering process and cost model

    PubMed Central

    Ramírez, Edna C; Johnston, David B; McAloon, Andrew J; Singh, Vijay

    2009-01-01

    Background Enzymatic corn wet milling (E-milling) is a process derived from conventional wet milling for the recovery and purification of starch and co-products using proteases to eliminate the need for sulfites and decrease the steeping time. In 2006, the total starch production in USA by conventional wet milling equaled 23 billion kilograms, including modified starches and starches used for sweeteners and ethanol production [1]. Process engineering and cost models for an E-milling process have been developed for a processing plant with a capacity of 2.54 million kg of corn per day (100,000 bu/day). These models are based on the previously published models for a traditional wet milling plant with the same capacity. The E-milling process includes grain cleaning, pretreatment, enzymatic treatment, germ separation and recovery, fiber separation and recovery, gluten separation and recovery and starch separation. Information for the development of the conventional models was obtained from a variety of technical sources including commercial wet milling companies, industry experts and equipment suppliers. Additional information for the present models was obtained from our own experience with the development of the E-milling process and trials in the laboratory and at the pilot plant scale. The models were developed using process and cost simulation software (SuperPro Designer®) and include processing information such as composition and flow rates of the various process streams, descriptions of the various unit operations and detailed breakdowns of the operating and capital cost of the facility. Results Based on the information from the model, we can estimate the cost of production per kilogram of starch using the input prices for corn, enzyme and other wet milling co-products. The work presented here describes the E-milling process and compares the process, the operation and costs with the conventional process. Conclusion The E-milling process was found to be cost competitive with the conventional process during periods of high corn feedstock costs since the enzymatic process enhances the yields of the products in a corn wet milling process. This model is available upon request from the authors for educational, research and non-commercial uses. PMID:19154623

  8. CAD/CAM complete dentures: a review of two commercial fabrication systems.

    PubMed

    Kattadiyil, Mathew T; Goodacre, Charles J; Baba, Nadim Z

    2013-06-01

    The use of computer-aided design and computer-aided manufacturing (CAD/CAM) has become available for complete dentures through the AvaDent and Dentca systems. AvaDent uses laser scanning and computer technology. Teeth are arranged and bases formed using proprietary software. The bases are milled from prepolymerized pucks of resin. Dentca uses computer software to produce virtual maxillary and mandibular edentulous ridges, arrange the teeth and form bases. The dentures are fabricated using a conventional processing technique.

  9. Analysis of geothermal temperatures for heat pumps application in Paraná (Brasil)

    NASA Astrophysics Data System (ADS)

    Santos, Alexandre F.; de Souza, Heraldo J. L.; Cantao, Mauricio P.; Gaspar, Pedro D.

    2016-11-01

    Geothermal heat pumps are broadly used in developed countries but scarcely in Brazil, in part because there is a lack of Brazilian soil temperature data. The aims of this work are to present soil temperature measurements and to compare geothermal heat pump system performance with conventional air conditioning systems. Geothermal temperature measurement results are shown for ten Paraná State cities, representing different soil and climate conditions. The measurements were made yearlong with calibrated equipment and a digital data acquisition system at different measuring stations. Geothermal and ambient temperature data were used for simulations of the coefficient of performance (COP) by means of pressure-enthalpy diagram based software for the working fluid in a vapor-compression cycle. It was verified that geothermal temperatures measured between January 13 and October 13, 2013, varied from 16 to 24 °C, while ambient temperature varied between 2 and 35 °C. Average COP values for the conventional system were 3.7 (cooling mode) and 5.0 kW/kW (heating mode), corresponding to 5.9 and 7.9 kW/kW for the geothermal system. Hence, an average efficiency gain of 59% was verified with geothermal system utilization in comparison with the conventional system.
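    As a reminder of how a pressure-enthalpy based COP calculation works for an idealized vapor-compression cycle (state 1: evaporator outlet, 2: compressor outlet, 3: condenser outlet, 4: expansion valve outlet, with h_4 ≈ h_3; a textbook form, not the cited study's exact model):

        \mathrm{COP}_{\text{cooling}} = \frac{h_1 - h_4}{h_2 - h_1}, \qquad
        \mathrm{COP}_{\text{heating}} = \frac{h_2 - h_3}{h_2 - h_1}.

    Milder ground temperatures lower the condensing temperature in cooling mode and raise the evaporating temperature in heating mode, shrinking the compression work h_2 - h_1 and thereby raising both ratios, which is the mechanism behind the reported efficiency gain.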

  10. New high-efficiency ion-trap mobility detection system for narcotics

    NASA Astrophysics Data System (ADS)

    McGann, William J.

    1997-02-01

    A new patented Ion Trap Mobility Spectrometer design is presented. Conventional IMS designs typically operate below 0.1 percent efficiency. This is due primarily to electric-field-driven sample ion discharge on a shutter grid. Since 99.9 percent of the sample ions generated in the reaction region are lost in this discharge process, the sensitivity of conventional systems is limited. The new design provides greater detection efficiency than conventional designs through the use of an 'ion trap' concept. The paper describes the plasma and sample ion dynamics in the reaction region of the new detector and discusses the advantages of utilizing a 'field-free' space to generate sample ions with high efficiency. Fast electronic switching is described which is used to perturb the field-free space and pulse the sample ions into the drift region for separation and subsequent detection using pseudo real-time software for analysis and display of the data. One application for this new detector is now being developed, a portable, hand-held system with switching capability for the detection of drugs and explosives. Preliminary ion spectra and sensitivity data are presented for cocaine and heroin using a hand sniffer configuration.

  11. Comparative UAV and Field Phenotyping to Assess Yield and Nitrogen Use Efficiency in Hybrid and Conventional Barley.

    PubMed

    Kefauver, Shawn C; Vicente, Rubén; Vergara-Díaz, Omar; Fernandez-Gallego, Jose A; Kerfal, Samir; Lopez, Antonio; Melichar, James P E; Serret Molins, María D; Araus, José L

    2017-01-01

    With the commercialization and increasing availability of Unmanned Aerial Vehicles (UAVs), multi-rotor copters have expanded rapidly into plant phenotyping studies thanks to their ability to provide clear, high-resolution images. As such, the traditional bottleneck of plant phenotyping has shifted from data collection to data processing. Fortunately, the necessarily controlled and repetitive design of plant phenotyping allows for the development of semi-automatic computer processing tools that may sufficiently reduce the time spent in data extraction. Here we present a comparison of UAV and field-based high throughput plant phenotyping (HTPP) using the free, open-source image analysis software FIJI (Fiji is just ImageJ) using RGB (conventional digital cameras), multispectral and thermal aerial imagery in combination with a matching suite of ground sensors in a study of two hybrids and one conventional barley variety with ten different nitrogen treatments, combining different fertilization levels and application schedules. A detailed correlation network for physiological traits and exploration of the data comparing between treatments and varieties provided insights into crop performance under different management scenarios. Multivariate regression models explained 77.8, 71.6, and 82.7% of the variance in yield from aerial, ground, and combined data sets, respectively.
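    To make the reported "variance explained" figures concrete, the sketch below fits a multivariate linear regression of yield on a few remote-sensing indices; the feature names and values are invented, and the study's actual predictor set is richer:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(42)
        n_plots = 60
        X = np.column_stack([
            rng.normal(0.70, 0.05, n_plots),   # e.g. NDVI (multispectral)
            rng.normal(28.0, 2.0, n_plots),    # e.g. canopy temperature, degC
            rng.normal(0.35, 0.04, n_plots),   # e.g. green area fraction (RGB)
        ])
        true_coef = np.array([8.0, -0.15, 5.0])
        y = X @ true_coef + rng.normal(0.0, 0.4, n_plots)   # synthetic yield

        model = LinearRegression().fit(X, y)
        print(f"variance explained (R^2): {model.score(X, y):.3f}")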

  12. RSpec: New Real-time Spectroscopy Software Enhances High School and College Learning

    NASA Astrophysics Data System (ADS)

    Field, Tom

    2011-01-01

    Nothing beats hands-on experience! Students often have a more profound learning experience in a hands-on laboratory than in a classroom. However, development of inquiry-based curricula for teaching spectroscopy has been thwarted by the absence of affordable equipment. There is now a software program that brings the excitement of real-time spectroscopy into the lab. It eliminates the processing delays that accompany conventional after-the-fact data analysis -- delays that often result in sagging enthusiasm and loss of interest in young, active minds. RSpec is the ideal software for high school or undergraduate physics classes. It is a state-of-the-art, multi-threaded software program that allows students to observe spectral profile graphs and their colorful synthesized spectra in real-time video. Using an off-the-shelf webcam, DSLR, cooled-CCD or even a cell phone camera, students can now gain hands-on experience in gathering, calibrating, and identifying spectra. Light sources can include the sun, bright night-time astronomical objects, or gas tubes. Students can even build their own spectroscopes using inexpensive diffraction "rainbow” glasses. For more advanced students, the addition of an inexpensive slitless diffraction grating allows the study of even more exciting objects. With a modest 8” telescope, students can use a simple webcam to classify star types, and to detect such exciting phenomena as Neptune's methane-absorption lines, M42's emission lines, and even, believe it or not, the redshift of 3C 273. These adventures are possible even under light-polluted urban skies. RSpec is also an excellent program for amateur astronomers who want to transition from visual CCD imaging to actual scientific data collection and analysis. As the developer of this software, I worked with both teachers and experienced spectroscopists to ensure that it would bring a compelling experience to your students. The response to real-time, colorful data has been very enthusiastic both in the classroom and in public outreach.

  13. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be for control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful in programming individual processors. However, they are obviously insufficient to program a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution for this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful to check relationships among a large number of processes or processors) and the time chart (which is useful to check precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning or understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology to enable easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), our prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D environment has considerable potential in the field of software engineering.

  14. System Advisor Model, SAM 2011.12.2: General Description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilman, P.; Dobos, A.

    2012-02-01

    This document describes the capabilities of the U.S. Department of Energy and National Renewable Energy Laboratory's System Advisor Model (SAM), Version 2011.12.2, released on December 2, 2011. SAM is software that models the cost and performance of renewable energy systems. Project developers, policy makers, equipment manufacturers, and researchers use graphs and tables of SAM results in the process of evaluating financial, technology, and incentive options for renewable energy projects. SAM simulates the performance of solar, wind, geothermal, biomass, and conventional power systems. The financial model can represent financing structures for projects that either buy and sell electricity at retail rates (residential and commercial) or sell electricity at a price determined in a power purchase agreement (utility). Advanced analysis options facilitate parametric, sensitivity, and statistical analyses, and allow for interfacing SAM with Microsoft Excel or with other computer programs. SAM is available as a free download at http://sam.nrel.gov. Technical support and more information about the software are available on the website.

  15. Lunar Applications in Reconfigurable Computing

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin

    2008-01-01

    NASA's Constellation Program is developing a lunar surface outpost in which reconfigurable computing will play a significant role. Reconfigurable systems provide a number of benefits over conventional software-based implementations including performance and power efficiency, while the use of standardized reconfigurable hardware provides opportunities to reduce logistical overhead. The current vision for the lunar surface architecture includes habitation, mobility, and communications systems, each of which greatly benefit from reconfigurable hardware in applications including video processing, natural feature recognition, data formatting, IP offload processing, and embedded control systems. In deploying reprogrammable hardware, considerations similar to those of software systems must be managed. There needs to be a mechanism for discovery enabling applications to locate and utilize the available resources. Also, application interfaces are needed to provide for both configuring the resources as well as transferring data between the application and the reconfigurable hardware. Each of these topics is explored in the context of deploying reconfigurable resources as an integral aspect of the lunar exploration architecture.

  16. Multi-hybrid instrumentations with smartphones and smartpads for innovative in-field and POC diagnostics

    NASA Astrophysics Data System (ADS)

    Hofmann, Dietrich; Dittrich, Paul-Gerald; Gärtner, Claudia; Klemm, Richard

    2013-03-01

    The aim of the paper is to orient research and development toward a completely new approach to innovative in-field and point-of-care diagnostics in industry, biology and medicine. The central functional modules are smartphones and/or smartpads supplemented by additional hardware apps and software apps. Specific examples are given for numerous practical applications concerning opto-digital instrumentation. The methodical classification distinguishes between different levels of combination of hardware apps (hwapps) and software apps (swapps) with smartphones and/or smartpads. These methods are fundamental enablers for the transformation from stationary conventional laboratory diagnostics to mobile innovative in-field and point-of-care diagnostics. The innovative approach opens up so-far-untapped markets of enormous size due to the convenience, reliability and affordability of smartphone and/or smartpad instruments. A highly visible advantage of smartphones and/or smartpads is their huge installed base, their worldwide connectivity via cloud services and the practical experience of their users.

  17. Dry wind tunnel system

    NASA Technical Reports Server (NTRS)

    Chen, Ping-Chih (Inventor)

    2013-01-01

    This invention is a ground flutter testing system without a wind tunnel, called the Dry Wind Tunnel (DWT) System. The DWT system consists of Ground Vibration Test (GVT) hardware, multiple-input multiple-output (MIMO) force controller software, and real-time unsteady aerodynamic force generation software developed from an aerodynamic reduced-order model (ROM). The ground flutter test using the DWT System operates on the real structural model; therefore, no scaled-down structural model, as required by the conventional wind tunnel flutter test, is involved. Furthermore, the impact of structural nonlinearities on aeroelastic stability can be included automatically. Moreover, the aeroservoelastic characteristics of the aircraft can be easily measured by simply including the flight control system in the loop. In addition, the unsteady aerodynamics generated computationally is free from wind tunnel wall interference. Finally, the DWT System can be conveniently and inexpensively carried out as a post-GVT test with the same hardware, with only some possible rearrangement of the shakers and the inclusion of additional sensors.

  18. Spatiotemporal matrix image formation for programmable ultrasound scanners

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean

    2018-02-01

    As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be obtained in a straightforward fashion for different types of probes and sequences with a small risk of error during implementation. In this work, we argue that as the computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulation software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality when compared against standard delay-and-sum approaches in phantoms and in vivo, and show that this approach can be used to form images even when using non-conventional probe designs for which adapted image formation algorithms are not readily available.
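    A toy version of the matrix view described above can be sketched as follows; the forward matrix here is random, whereas in practice each column would come from a validated ultrasound simulator for the actual probe and sequence:

        # Model RF data as s = A x and recover the image x by regularized least squares.
        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_pixels = 400, 100
        A = rng.normal(size=(n_samples, n_pixels))        # spatiotemporal forward matrix
        x_true = np.zeros(n_pixels)
        x_true[[12, 57, 80]] = 1.0                        # three point targets
        s = A @ x_true + 0.05 * rng.normal(size=n_samples)

        lam = 1.0                                         # Tikhonov regularization weight
        x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ s)
        print("strongest recovered pixels:", np.argsort(x_hat)[-3:])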

  19. Operative record using intraoperative digital data in neurosurgery.

    PubMed

    Houkin, K; Kuroda, S; Abe, H

    2000-01-01

    The purpose of this study was to develop a new method for more efficient and accurate operative records using intraoperative digital data in neurosurgery, covering both macroscopic procedures and microscopic procedures under an operating microscope. Macroscopic procedures were recorded using a digital camera, and microscopic procedures were recorded using a micro digital camera attached to an operating microscope. Operative records were then recorded digitally and filed in a computer using image retouching software and database software. The time necessary to edit the digital data and complete the record was less than 30 minutes. Once these operative records are digitally filed, they are easily transferred and used as a database. Using digital operative records along with digital photography, neurosurgeons can document their procedures more accurately and efficiently than by the conventional method (handwriting). A complete digital operative record is not only accurate but also time saving. Construction of a database, data transfer and desktop publishing can all be achieved using the intraoperative data, including intraoperative photographs.

  20. Off-the-shelf real-time monitoring of satellite constellations in a visual 3-D environment

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.; Hervias, Felipe; Cheng, Cecilia Han; Mactutis, Anthony; Angelino, Robert

    1996-01-01

    The multimission spacecraft analysis system (MSAS) data monitor is a generic software product for future real-time data monitoring and analysis. The system represents the status of a satellite constellation through the shape, color, motion and position of graphical objects floating in a three-dimensional virtual reality environment. It may be used for monitoring large volumes of data, for viewing results in configurable displays, and for providing high-level and detailed views of a constellation of monitored satellites. The data monitor is considered an improvement on conventional graphic and text-based displays because it increases the amount of data that the operator can absorb in a given period, and it can be installed and configured without requiring software development by the end user. The functionality of the system is described, including navigation capabilities, the representation of alarms in the cybergrid, limit violations, real-time trend analysis, and alarm status indication.

  1. De-optical-line-terminal hybrid access-aggregation optical network for time-sensitive services based on software-defined networking orchestration

    NASA Astrophysics Data System (ADS)

    Bai, Wei; Yang, Hui; Xiao, Hongyun; Yu, Ao; He, Linkuan; Zhang, Jie; Li, Zhen; Du, Yi

    2017-11-01

    With the increasing variety of services in networks, time-sensitive services (TSSs) have appeared and created a pressing need for low delay. Ultralow-latency communication has become one of the important development goals for many scenarios in the coming 5G era (e.g., robotics and driverless cars). However, the conventional methods, which decrease delay by increasing the available resources and the network transmission speed, have limited effect; a new breakthrough for ultralow-latency communication is necessary. We propose a de-optical-line-terminal (De-OLT) hybrid access-aggregation optical network (DAON) for TSSs based on software-defined networking (SDN) orchestration. In this network, low-latency all-optical communication based on optical burst switching can be achieved by removing the OLT. To support this network and guarantee quality of service for TSSs, we design an SDN-driven control method and a service provisioning method. Numerical results demonstrate that the proposed DAON improves network service efficiency and avoids traffic congestion.

  2. The definitive analysis of the Bendandi's methodology performed with a specific software

    NASA Astrophysics Data System (ADS)

    Ballabene, Adriano; Pescerelli Lagorio, Paola; Georgiadis, Teodoro

    2015-04-01

    The presentation aims to clarify the "Bendandi method", supposed in the past to be capable of forecasting earthquakes and never explicitly passed on to posterity by the geophysicist from Faenza. The geoethical implications of Bendandi's forecasts, and of the speculation about possible earthquakes inferred from supposed "Bendandian" methodologies, arose in previous years from the social alarm caused by predicted earthquakes that never happened but were widely publicized by the media following some 'well informed' non-conventional scientists. The analysis was conducted through an extensive search of the 'Raffaele Bendandi' archive at the Geophysical Observatory of Faenza, and the forecasts were analysed using specially developed software, called "Bendandiano Dashboard", which can reproduce the planetary configurations reported in the graphs drawn by the Italian geophysicist. This analysis should serve to clarify definitively the basis of Bendandi's calculations, as well as to prevent future unwarranted warnings issued on the basis of supposed prophecies and illusory legacy documents.

  3. CONTIN XPCS: software for inverse transform analysis of X-ray photon correlation spectroscopy dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan

    X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) reveal materials dynamics using coherent scattering, with XPCS permitting the investigation of dynamics in a more diverse array of materials than DLS. Heterogeneous dynamics occur in many material systems. The authors' recent work has shown how classic tools employed in the DLS analysis of heterogeneous dynamics can be extended to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. The present work describes the software implementation of inverse transform analysis of XPCS data. This software, called CONTIN XPCS, is an extension of traditional CONTIN analysis and accommodates the various dynamics encountered in equilibrium XPCS measurements.
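
    As an illustration of the inverse-transform idea (not of the actual CONTIN algorithm), the sketch below fits a synthetic intensity correlation function g2(tau) with a non-negative distribution of relaxation rates using a crude Tikhonov-regularized non-negative least-squares step; the rate grid, regularization weight and synthetic data are all assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic g2(tau) with two relaxation rates (heterogeneous dynamics).
tau = np.logspace(-3, 2, 60)                      # delay times, s
g2 = 1.0 + (0.6 * np.exp(-2.0 * tau) + 0.4 * np.exp(-0.05 * tau)) ** 2

# Kernel: g2 - 1 = (sum_j A_j exp(-Gamma_j tau))^2, so work with its square root.
g1 = np.sqrt(np.clip(g2 - 1.0, 0.0, None))
gammas = np.logspace(-3, 2, 40)                   # trial relaxation rates, 1/s
K = np.exp(-np.outer(tau, gammas))

# Tikhonov-regularized non-negative least squares: stack a smoothing penalty
# under the kernel and solve with NNLS (a crude stand-in for CONTIN).
alpha = 0.1
K_aug = np.vstack([K, alpha * np.eye(len(gammas))])
y_aug = np.concatenate([g1, np.zeros(len(gammas))])
dist, _ = nnls(K_aug, y_aug)

for g, a in zip(gammas, dist):
    if a > 0.05:
        print(f"Gamma = {g:8.3f} 1/s, amplitude = {a:.2f}")
```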

  4. CONTIN XPCS: software for inverse transform analysis of X-ray photon correlation spectroscopy dynamics

    DOE PAGES

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; ...

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) reveal materials dynamics using coherent scattering, with XPCS permitting the investigation of dynamics in a more diverse array of materials than DLS. Heterogeneous dynamics occur in many material systems. The authors' recent work has shown how classic tools employed in the DLS analysis of heterogeneous dynamics can be extended to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. The present work describes the software implementation of inverse transform analysis of XPCS data. This software, called CONTIN XPCS, is an extension of traditional CONTIN analysis and accommodates the various dynamics encountered in equilibrium XPCS measurements.

  5. ISLE (Image and Signal LISP Environment): A functional language interface for signal and image processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azevedo, S.G.; Fitch, J.P.

    1987-10-21

    Conventional software interfaces that use imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal-processing software (SIG). As an alternative, ''functional language'' interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal LISP Environment (ISLE) is an example of an interpreted functional language interface based on Common LISP. Advantages of ISLE include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence (AI) software. 10 refs.

  6. ISLE (Image and Signal Lisp Environment): A functional language interface for signal and image processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azevedo, S.G.; Fitch, J.P.

    1987-05-01

    Conventional software interfaces which utilize imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal processing software (SIG). Existing ''functional language'' interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal Lisp Environment (ISLE) will be discussed as an example of an interpreted functional language interface based on Common LISP. Additional benefits include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence software.

  7. Development of system decision support tools for behavioral trends monitoring of machinery maintenance in a competitive environment

    NASA Astrophysics Data System (ADS)

    Adeyeri, Michael Kanisuru; Mpofu, Khumbulani

    2017-06-01

    The article centres on software system development for a manufacturing company that produces polyethylene bags using mostly conventional machines, in a competitive world where each business enterprise desires to stand tall. The package is meant to assist in gaining market share and in taking maintenance and production decisions, through the dynamism and flexibility embedded in it, as customer demand varies under the pressure of meeting the set goals. The production and machine condition monitoring software (PMCMS) is programmed in C# and designed to support hardware integration, real-time machine condition monitoring based on a condition-based maintenance approach, maintenance decision suggestions and suitable production strategies as the demand for products keeps changing in a highly competitive environment. PMCMS works with an embedded device which feeds it with data from the various machines being monitored at the workstation; the data are read at the base station via a wireless transceiver and stored in a database. A case study was used in the implementation of the developed system, and the results show that it can monitor machine health effectively by displaying machine health status, giving repair suggestions for probable faults and deciding strategies for both production and maintenance, and can thus clearly enhance maintenance performance.

  8. Surveillance systems for intermodal transportation

    NASA Astrophysics Data System (ADS)

    Jakovlev, Sergej; Voznak, Miroslav; Andziulis, Arunas

    2015-05-01

    Intermodal container monitoring is considered a major security issue by many major logistics companies and countries worldwide. The current view of the problem originated in 2002, right after the 9/11 attacks, when the worldwide Container Security Initiative (CSI, 2002) was launched and shaped the perception of transportation operations. Today more than 80 large ports all over the world contribute to its further development and integration into everyday transportation operations and improve the regulations for developing regions. Although these improvements allow us to feel safer and more secure, constant management of transportation operations has become a very difficult problem for conventional data analysis methods and information systems. The paper proposes a whole new concept for improving the Container Security Initiative (CSI) by virtually connecting safety and security processes and systems. A conceptual middleware approach with deployable intelligent agent modules is proposed, together with possible scenarios, and a testbed is used to test the solution. Middleware examples are visually programmed using National Instruments LabVIEW software packages and wireless sensor network hardware modules. Experimental software is used to evaluate the solution. This research is a contribution to intermodal transportation and is intended to be used as a means for the development of intelligent transport systems.

  9. SWIFT2: Software for continuous ensemble short-term streamflow forecasting for use in research and operations

    NASA Astrophysics Data System (ADS)

    Perraud, Jean-Michel; Bennett, James C.; Bridgart, Robert; Robertson, David E.

    2016-04-01

    Research undertaken through the Water Information Research and Development Alliance (WIRADA) has laid the foundations for continuous deterministic and ensemble short-term forecasting services. One output of this research is the software Short-term Water Information Forecasting Tools version 2 (SWIFT2). SWIFT2 is developed for use in research on short-term streamflow forecasting techniques as well as operational forecasting services at the Australian Bureau of Meteorology. The variety of uses in research and operations requires a modular software system whose components can be arranged in applications that are fit for each particular purpose, without unnecessary software duplication. SWIFT2 modelling structures consist of sub-areas of hydrologic models, nodes and links with in-stream routing, and reservoirs. While this modelling structure is customary, SWIFT2 is built from the ground up for computational and data intensive applications such as the ensemble forecasts necessary for the estimation of the uncertainty in forecasts. Support for parallel computation on multiple processors or on a compute cluster is a primary use case. A convention is defined to store large multi-dimensional forecasting data and its metadata using the netCDF library. SWIFT2 is written in modern C++ with state-of-the-art software engineering techniques and practices. A salient technical feature is a well-defined application programming interface (API) to facilitate access from different applications and technologies. SWIFT2 is already seamlessly accessible on Windows and Linux via packages in R, Python, Matlab and .NET languages such as C# and F#. Command line or graphical front-end applications are also feasible. This poster gives an overview of the technology stack and illustrates the resulting features of SWIFT2 for users. Research and operational uses share the same common core C++ modelling shell for consistency, augmented by different software modules suitable for each context. The accessibility via interactive modelling languages is particularly amenable to using SWIFT2 in exploratory research, with a dynamic and versatile experimental modelling workflow. This does not come at the expense of the stability and reliability required for use in operations, where only mature and stable components are used.

  10. [Comparison of root resorption between self-ligating and conventional brackets using cone-beam CT].

    PubMed

    Liu, Yun; Guo, Hong-ming

    2016-04-01

    The aim was to analyze the differences in root resorption between passive self-ligating and conventional brackets, and to determine the relationship between passive self-ligating brackets and root resorption. Fifty patients were randomly divided into 2 groups using passive self-ligating brackets or conventional straight wire brackets (0.022 system), respectively. Cone-beam CT was taken before and after treatment. The amount of external apical root resorption of the maxillary incisors was measured on CBCT images. Student's t test was performed to analyze the differences in apical root resorption between the 2 groups with the SPSS 17.0 software package. No significant difference (P > 0.05) in root resorption of maxillary incisors was found between passive self-ligating brackets and conventional brackets. Both passive self-ligating brackets and conventional brackets can cause root resorption, but the difference between them was not significant; passive self-ligating brackets do not induce more root resorption.

  11. Simulation for assessment of bulk cargo berths number

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. L.; Kirichenko, A. V.; Slitsan, A. E.

    2017-10-01

    The world trade volumes of mineral resources have been growing constantly for decades, notwithstanding economic crises. At the same time, the proximity of bulk materials, as products, to the starting point of the integrated value-added or logistic supply chain makes their unit price relatively low. This fact automatically causes a strong economic sensitivity of the supply chain to the level of operational expenses in every link. The core of the integrated logistic supply chain is its maritime segment, with the fleet and terminals (i.e. the cargo transportation system) serving as its base platform. In turn, the terminal berths play the role of the interface between the fleet and the land-transportation sub-system. Current developments in maritime transportation technologies, ship and terminal specialization, vessel size growth, rationalization of route patterns, regionalization of trade, etc., have made conventional calculation methods inadequate. The solution to the problem is to use object-oriented simulation. At the same time, this approach usually assumes only ad hoc models, so it does not provide the generality of its conventional analytical predecessors. The time- and labor-consuming procedure of simulation results in a very narrow application domain of the model. This article describes a new simulation instrument combining the generality of the analytical techniques with the efficiency of object-oriented simulation. The approach is implemented as a software module whose validity and adequacy have been proved. The software was tested on several sea terminal design projects and confirmed its efficiency.
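
    The kind of object-oriented berth simulation advocated here can be illustrated, in a deliberately simplified form, by a Monte Carlo model of ships arriving at a terminal with a fixed number of berths; the arrival rate, service-time distribution and traffic scenario below are invented for the example and are not taken from the article.

```python
import heapq
import random

def simulate_berths(n_berths, arrival_rate, mean_service_h, horizon_h, seed=1):
    """Return the mean waiting time (hours) of ships at a terminal with n_berths."""
    random.seed(seed)
    free_at = [0.0] * n_berths            # time at which each berth becomes free
    heapq.heapify(free_at)
    t, waits = 0.0, []
    while t < horizon_h:
        t += random.expovariate(arrival_rate)            # next ship arrival
        service = random.expovariate(1.0 / mean_service_h)
        berth_free = heapq.heappop(free_at)
        start = max(t, berth_free)                       # wait if all berths busy
        waits.append(start - t)
        heapq.heappush(free_at, start + service)
    return sum(waits) / len(waits)

# Compare candidate berth numbers for an assumed traffic scenario.
for n in (2, 3, 4):
    w = simulate_berths(n, arrival_rate=0.25, mean_service_h=10.0, horizon_h=5000.0)
    print(f"{n} berths: mean wait {w:.1f} h")
```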

  12. Rapid Risk Evaluation (ER2) Using MS Excel Spreadsheet: a Case Study of Fredericton (new Brunswick, Canada)

    NASA Astrophysics Data System (ADS)

    McGrath, H.; Stefanakis, E.; Nastev, M.

    2016-06-01

    Conventional knowledge of the flood hazard alone (extent and frequency) is not sufficient for informed decision-making. The public safety community needs tools and guidance to adequately undertake flood hazard risk assessment in order to estimate the respective damages and social and economic losses. While many complex computer models have been developed for flood risk assessment, they require highly trained personnel to prepare the necessary input (hazard, inventory of the built environment, and vulnerabilities) and analyze model outputs. As such, tools which utilize open-source software or are built within popular desktop software programs are appealing alternatives. The recently developed Rapid Risk Evaluation (ER2) application runs scenario-based loss assessment analyses in a Microsoft Excel spreadsheet. User input is limited to a handful of intuitive drop-down menus used to describe the building type, age, occupancy and the expected water level. Pending the development of local depth-damage curves and other needed vulnerability parameters, those from the U.S. FEMA Hazus-Flood software have been imported and are temporarily accessed in conjunction with user input to display exposure and estimated economic losses related to the structure and contents of the building. Building types and occupancies representative of those most exposed to flooding in Fredericton (New Brunswick) were introduced and test flood scenarios were run. The algorithm was successfully validated against results from the Hazus-Flood model for the same building types and flood depths.
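
    At its core, such a spreadsheet-based loss estimate interpolates a depth-damage curve for the selected building type and multiplies the resulting damage ratio by the replacement value. The sketch below uses invented curve points, building types and values rather than the actual Hazus-Flood tables.

```python
import numpy as np

# Hypothetical depth-damage curves: flood depth (m) -> structural damage ratio.
DEPTH_M = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
DAMAGE_RATIO = {
    "one-storey residence": np.array([0.00, 0.12, 0.25, 0.35, 0.45, 0.60]),
    "two-storey residence": np.array([0.00, 0.08, 0.18, 0.26, 0.34, 0.48]),
}

def estimated_loss(building_type, water_depth_m, replacement_value):
    """Interpolate the damage ratio and scale it by the replacement value."""
    curve = DAMAGE_RATIO[building_type]
    ratio = np.interp(water_depth_m, DEPTH_M, curve)
    return ratio * replacement_value

loss = estimated_loss("one-storey residence", water_depth_m=1.2,
                      replacement_value=250_000)
print(f"Estimated structural loss: ${loss:,.0f}")
```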

  13. Mass separation of deuterium and helium with conventional quadrupole mass spectrometer by using varied ionization energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yaowei; Hu, Jiansheng, E-mail: hujs@ipp.ac.cn; Wan, Zhao

    2016-03-15

    Deuterium pressure in a deuterium-helium gas mixture is successfully measured by a common quadrupole mass spectrometer (model RGA200) with a resolution of ~0.5 atomic mass unit (AMU), by using varied ionization energy together with newly developed software and a dedicated calibration for the RGA200. The new software is developed in MATLAB with new functions: electron energy (EE) scanning, deuterium partial pressure measurement, and automatic data saving. The RGA200 with the new software is calibrated in pure deuterium and pure helium over 1.0 × 10^-6 to 5.0 × 10^-2 Pa, and the relation between pressure and the AMU 4 ion current under EE = 25 eV and EE = 70 eV is obtained. From the calibration results and RGA200 scans with varied ionization energy in the deuterium-helium mixture, both the deuterium partial pressure (P_D2) and the helium partial pressure (P_He) can be obtained. The results show that the deuterium partial pressure can be measured if P_D2 > 10^-6 Pa (limited by the ultimate pressure of the calibration vessel), and that the helium pressure can be measured only if P_He/P_D2 > 0.45; the measurement error is evaluated as 15%. This method was successfully employed in the EAST 2015 summer campaign to monitor deuterium outgassing/desorption during helium discharge cleaning.
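
    The measurement principle is that at EE = 25 eV the electron energy is close to the helium ionization threshold (about 24.6 eV), so the AMU 4 ion current comes essentially from D2, while at 70 eV both species contribute; with calibrated sensitivities, the two partial pressures then follow from a small linear system. The sketch below shows only that arithmetic; the sensitivity and current values are placeholders, not the RGA200 calibration.

```python
import numpy as np

# Assumed calibrated sensitivities (A/Pa) of the AMU-4 ion current to D2 and He
# at the two electron energies.  Values are placeholders, not real calibration.
S = np.array([[2.0e-6, 0.0],       # EE = 25 eV: He barely ionized -> ~D2 only
              [3.5e-6, 1.5e-6]])   # EE = 70 eV: both D2 and He contribute

def partial_pressures(i_25eV, i_70eV):
    """Solve S @ [P_D2, P_He] = [I_25, I_70] for the partial pressures (Pa)."""
    return np.linalg.solve(S, np.array([i_25eV, i_70eV]))

p_d2, p_he = partial_pressures(i_25eV=4.0e-9, i_70eV=9.5e-9)
print(f"P_D2 = {p_d2:.2e} Pa, P_He = {p_he:.2e} Pa")
```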

  14. Digital diagnosis of medical images

    NASA Astrophysics Data System (ADS)

    Heinonen, Tomi; Kuismin, Raimo; Jormalainen, Raimo; Dastidar, Prasun; Frey, Harry; Eskola, Hannu

    2001-08-01

    The popularity of digital imaging devices and PACS installations has increased during the last years. Still, images are analyzed and diagnosed using conventional techniques. Our research group began to study the requirements for digital image diagnostic methods to be applied together with PACS systems. The research focused on various image analysis procedures (e.g., segmentation, volumetry, 3D visualization, image fusion, anatomic atlas, etc.) that could be useful in medical diagnosis. We have developed Image Analysis software (www.medimag.net) to enable several image-processing applications in medical diagnosis, such as volumetry, multimodal visualization, and 3D visualization. We have also developed a commercial scalable image archive system (ActaServer, supports DICOM) based on component technology (www.acta.fi), and several telemedicine applications. All the software and systems operate in the NT environment and are in clinical use in several hospitals. The analysis software has been applied in clinical work and utilized in numerous patient cases (500 patients). This method has been used in the diagnosis, therapy and follow-up of various diseases of the central nervous system (CNS), respiratory system (RS) and human reproductive system (HRS). In many of these diseases, e.g. Systemic Lupus Erythematosus (CNS), nasal airway diseases (RS) and ovarian tumors (HRS), these methods have been used for the first time in clinical work. According to our results, digital diagnosis improves diagnostic capabilities, and together with PACS installations it will become a standard tool during the next decade by enabling more accurate diagnosis and patient follow-up.

  15. Subsurface Transport Over Multiple Phases Demonstration Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-01-05

    The STOMP simulator is a suite of numerical simulators developed by Pacific Northwest National Laboratory for addressing problems involving coupled multifluid hydrologic, thermal, geochemical, and geomechanical processes in the subsurface. The simulator has been applied to problems concerning environmental remediation, environmental stewardship, carbon sequestration, conventional petroleum production, and the production of unconventional hydrocarbon fuels. The simulator is copyrighted by Battelle Memorial Institute, and is available outside of PNNL via use agreements. To promote the open exchange of scientific ideas the simulator is provided as source code. A demonstration version of the simulator has been developed, which will provide potential new users with an executable (not source code) implementation of the software royalty free. Demonstration versions will be offered via the STOMP website for all currently available operational modes of the simulator. The demonstration versions of the simulator will be configured with the direct banded linear system solver and have a limit of 1,000 active grid cells. This will provide potential new users with an opportunity to apply the code to simple problems, including many of the STOMP short course problems, without having to pay a license fee. Users will be required to register on the STOMP website prior to receiving an executable.

  16. Quantitative analysis of cardiovascular MR images.

    PubMed

    van der Geest, R J; de Roos, A; van der Wall, E E; Reiber, J H

    1997-06-01

    The diagnosis of cardiovascular disease requires the precise assessment of both morphology and function. Nearly all aspects of cardiovascular function and flow can be quantified nowadays with fast magnetic resonance (MR) imaging techniques. Conventional and breath-hold cine MR imaging allow the precise and highly reproducible assessment of global and regional left ventricular function. During the same examination, velocity encoded cine (VEC) MR imaging provides measurements of blood flow in the heart and great vessels. Quantitative image analysis often still relies on manual tracing of contours in the images. Reliable automated or semi-automated image analysis software would be very helpful to overcome the limitations associated with the manual and tedious processing of the images. Recent progress in MR imaging of the coronary arteries and myocardial perfusion imaging with contrast media, along with the further development of faster imaging sequences, suggest that MR imaging could evolve into a single technique ('one stop shop') for the evaluation of many aspects of heart disease. As a result, it is very likely that the need for automated image segmentation and analysis software algorithms will further increase. In this paper the developments directed towards the automated image analysis and semi-automated contour detection for cardiovascular MR imaging are presented.

  17. Large autonomous spacecraft electrical power system (LASEPS)

    NASA Technical Reports Server (NTRS)

    Dugal-Whitehead, Norma R.; Johnson, Yvette B.

    1992-01-01

    NASA - Marshall Space Flight Center is creating a large high voltage electrical power system testbed called LASEPS. This testbed is being developed to simulate an end-to-end power system from power generation and source to loads. When the system is completed it will have several power configurations, which will include several battery configurations. These configurations are: two 120 V batteries, one or two 150 V batteries, and one 250 to 270 V battery. This breadboard encompasses varying levels of autonomy from remote power converters to conventional software control to expert system control of the power system elements. In this paper, the construction and provisions of this breadboard are discussed.

  18. Relative Displacement Method for Track-Structure Interaction

    PubMed Central

    Ramos, Óscar Ramón; Pantaleón, Marcos J.

    2014-01-01

    The track-structure interaction effects are usually analysed with conventional FEM programs, where it is difficult to implement the complex track-structure connection behaviour, which is nonlinear, elastic-plastic and depends on the vertical load. The authors developed an alternative analysis method, which they call the relative displacement method. It is based on the calculation of deformation states in single DOF element models that satisfy the boundary conditions. For its solution, an iterative optimisation algorithm is used. This method can be implemented in any programming language or analysis software. A comparison with ABAQUS calculations shows a very good result correlation and compliance with the standard's specifications. PMID:24634610

  19. A Simple Method Based on the Application of a CCD Camera as a Sensor to Detect Low Concentrations of Barium Sulfate in Suspension

    PubMed Central

    de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba

    2011-01-01

    The development of a simple, rapid and low-cost method based on video image analysis and aimed at the detection of low concentrations of precipitated barium sulfate is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (Red, Green and Blue) color system was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of precipitation of inorganic salts and also for detecting the crystallization of organic compounds.
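
    Conceptually, the measurement reduces to extracting the mean intensity of a region of interest in each RGB frame and mapping it to a concentration through a calibration line. The sketch below uses a synthetic frame instead of a webcam capture, and the calibration slope and intercept are invented.

```python
import numpy as np

def roi_intensity(frame_rgb, roi):
    """Mean grey intensity (0-255) of a region of interest in an RGB frame."""
    r0, r1, c0, c1 = roi
    return frame_rgb[r0:r1, c0:c1, :].astype(float).mean()

# Invented calibration: scattered intensity grows roughly linearly with the
# suspended BaSO4 concentration over the working range.
SLOPE, INTERCEPT = 0.8, 20.0        # counts per (mg/L), baseline counts

def concentration_mg_per_L(intensity):
    return max(0.0, (intensity - INTERCEPT) / SLOPE)

# Synthetic 480x640 frame standing in for a webcam capture.
rng = np.random.default_rng(0)
frame = np.clip(rng.normal(60.0, 3.0, (480, 640, 3)), 0, 255)
i_mean = roi_intensity(frame, roi=(200, 280, 280, 360))
print(f"ROI intensity {i_mean:.1f} -> ~{concentration_mg_per_L(i_mean):.1f} mg/L BaSO4")
```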

  20. First international two-way satellite time and frequency transfer experiment employing dual pseudo-random noise codes.

    PubMed

    Tseng, Wen-Hung; Huang, Yi-Jiun; Gotoh, Tadahiro; Hobiger, Thomas; Fujieda, Miho; Aida, Masanori; Li, Tingyu; Lin, Shinn-Yan; Lin, Huang-Tien; Feng, Kai-Ming

    2012-03-01

    Two-way satellite time and frequency transfer (TWSTFT) is one of the main techniques used to compare atomic time scales over long distances. To both improve the precision of TWSTFT and decrease the satellite link fee, a new software-defined modem with dual pseudo-random noise (DPN) codes has been developed. In this paper, we demonstrate the first international DPN-based TWSTFT experiment over a period of 6 months. The results of DPN exhibit excellent performance, which is competitive with the Global Positioning System (GPS) precise point positioning (PPP) technique in the short-term and consistent with the conventional TWSTFT in the long-term. Time deviations of less than 75 ps are achieved for averaging times from 1 s to 1 d. Moreover, the DPN data has less diurnal variation than that of the conventional TWSTFT. Because the DPN-based system has advantages of higher precision and lower bandwidth cost, it is one of the most promising methods to improve international time-transfer links.

  1. A pilot study comparing the effectiveness of conventional training and virtual reality simulation in the skills acquisition of junior dental students.

    PubMed

    Quinn, Frank; Keogh, Paul; McDonald, Ailbhe; Hussey, David

    2003-02-01

    The use of virtual reality (VR) in the training of operative dentistry is a recent innovation, and little research has been published on its efficacy compared with conventional training methods. To evaluate possible benefits, junior undergraduate dental students were randomly assigned to one of three groups: group 1 was taught by conventional means only; group 2 was trained by conventional means combined with VR repetition and reinforcement (with access to a human instructor for operative advice); and group 3 was trained by conventional means combined with VR repetition and reinforcement, but without instructor evaluation/advice, which was supplied only via the VR-associated software. At the end of the research period, all groups executed two class 1 preparations that were evaluated blindly by 'expert' trainers against traditional criteria (outline, retention, smoothness, depth, wall angulation and cavity margin index). Analyses of the resulting scores indicated a lack of significant differences between the three groups, except for the category of 'outline form', for which group 2 produced significantly lower (i.e. better) scores than the conventionally trained group. A statistical comparison between scores from two 'expert' examiners indicated a lack of agreement, despite identical written and visual criteria being used for evaluation by both. Both examiners, however, generally showed similar trends in evaluation. An anonymous questionnaire suggested that students recognized the benefits of VR training (e.g. ready access to assessment, error identification and how errors can be corrected), but the majority felt that it would not replace conventional training methods (95%), although participants recognized the potential for development of VR systems in dentistry. The most common reasons cited for the preference for conventional training were excessive critical feedback (55%), lack of personal contact (50%) and technical hardware difficulties (20%) associated with VR-based training.

  2. The UGRID Reader - A ParaView Plugin for the Visualization of Unstructured Climate Model Data in NetCDF Format

    NASA Astrophysics Data System (ADS)

    Brisc, Felicia; Vater, Stefan; Behrens, Joern

    2016-04-01

    We present the UGRID Reader, a visualization software component that implements the UGRID Conventions in ParaView. It currently supports the reading and visualization of 2D unstructured triangular, quadrilateral and mixed triangle/quadrilateral meshes, while the data can be defined per cell or per vertex. The Climate and Forecast Metadata Conventions (CF Conventions) have been established for many years as the standard framework for climate data written in NetCDF format. While they allow storing unstructured data simply as data defined at a series of points, they do not currently address the topology of the underlying unstructured mesh. However, it is often necessary to have additional mesh topology information, i.e. whether it is a one-dimensional network, a 2D triangular mesh or a flexible mixed triangle/quadrilateral mesh, a 2D mesh with vertical layers, or a fully unstructured 3D mesh. The UGRID Conventions proposed by the UGRID Interoperability group attempt to fill this void by extending the CF Conventions with topology specifications. As the UGRID Conventions are increasingly popular with an important subset of the CF community, they warrant the development of a customized tool for the visualization and exploration of UGRID-conforming data. The implementation of the UGRID Reader has been designed according to the ParaView plugin architecture. This approach allowed us to tap into the powerful reading and rendering capabilities of ParaView, while the reader remains easy to install. We aim at parallelism to be able to process large data sets. Furthermore, our current application of the reader is the visualization of higher-order simulation output, which demands a special representation of the data within a cell.
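
    Before any rendering, a UGRID-aware reader has to locate the dummy mesh-topology variable and follow its attributes to the node coordinates and face-node connectivity. A minimal sketch using the netCDF4 Python package is shown below; the file name is hypothetical and only the simple path for a 2D mesh is handled.

```python
from netCDF4 import Dataset

def read_ugrid_mesh(path):
    """Return (node_x, node_y, face_node_connectivity) from a UGRID NetCDF file."""
    with Dataset(path) as nc:
        # Find the dummy variable that carries the mesh topology description.
        mesh = next(v for v in nc.variables.values()
                    if getattr(v, "cf_role", "") == "mesh_topology")
        x_name, y_name = mesh.node_coordinates.split()
        node_x = nc.variables[x_name][:]
        node_y = nc.variables[y_name][:]
        conn_var = nc.variables[mesh.face_node_connectivity]
        start = getattr(conn_var, "start_index", 0)
        return node_x, node_y, conn_var[:] - start   # 0-based connectivity

# Hypothetical usage (file name invented):
# x, y, faces = read_ugrid_mesh("unstructured_output.nc")
```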

  3. HapHop-Physio: a computer game to support cognitive therapies in children

    PubMed Central

    Rico-Olarte, Carolina; López, Diego M; Narváez, Santiago; Farinango, Charic D; Pharow, Peter S

    2017-01-01

    Background Care and support of children with physical or mental disabilities are accompanied by serious concerns for parents, families, healthcare institutions, schools, and their communities. Recent studies and technological innovations have demonstrated the feasibility of providing therapy and rehabilitation services to children supported by computer games. Objective The aim of this paper is to present HapHop-Physio, an innovative computer game that combines exercise with fun and learning, developed to support cognitive therapies in children. Methods Conventional software engineering methods such as the Scrum methodology, a functionality test and a related usability test were part of the comprehensive methodology adapted to develop HapHop-Physio. Results The game supports visual and auditory attention therapies, as well as visual and auditory memory activities. The game was developed by a multidisciplinary team on the basis of the Hopscotch® platform provided by the Fraunhofer Institute for Digital Media Technology (IDMT) in Germany, and was designed in collaboration with a rehabilitation clinic in Colombia. HapHop-Physio was tested and evaluated to probe its functionality and user satisfaction. Conclusion The results show the development of an easy-to-use and fun game by a multidisciplinary team using state-of-the-art videogame technologies and software methodologies. Children testing the game concluded that they would like to play it again while undergoing rehabilitation therapies.

  4. GEMPAK5 user's guide, version 5.0

    NASA Technical Reports Server (NTRS)

    Desjardins, Mary L.; Brill, Keith F.; Schotz, Steven S.

    1991-01-01

    GEMPAK is a general meteorological software package used to analyze and display conventional meteorological data as well as satellite derived parameters. The User's Guide describes the GEMPAK5 programs and input parameters and details the algorithms used for the meteorological computations.

  5. Innovative Technology in Engineering Education.

    ERIC Educational Resources Information Center

    Fishwick, Wilfred

    1991-01-01

    Discusses the impact that computer-assisted technologies, including applications to software, video recordings, and satellite broadcasts, have had upon the conventions and procedures within engineering education. Calls for the complete utilization of such devices through their appropriate integration into updated education activities effectively…

  6. IRISpy: Analyzing IRIS Data in Python

    NASA Astrophysics Data System (ADS)

    Ryan, Daniel; Christe, Steven; Mumford, Stuart; Baruah, Ankit; Timothy, Shelbe; Pereira, Tiago; De Pontieu, Bart

    2017-08-01

    IRISpy is a new community-developed open-source software library for analysing IRIS level 2 data. It is written in Python, a free, cross-platform, general-purpose, high-level programming language. A wide array of scientific computing software packages have already been developed in Python, from numerical computation (NumPy, SciPy, etc.), to visualization and plotting (matplotlib), to solar-physics-specific data analysis (SunPy). IRISpy is currently under development as a SunPy-affiliated package, which means it depends on the SunPy library, follows similar standards and conventions, and is developed with the support of the SunPy development team. IRISpy has two primary data objects, one for analyzing slit-jaw imager data and another for analyzing spectrograph data. Both objects contain basic slicing, indexing, plotting, and animating functionality to allow users to easily inspect, reduce and analyze the data. As part of this functionality the objects can output SunPy Maps, TimeSeries, Spectra, etc. of relevant data slices for easier inspection and analysis. Work is also ongoing to provide additional data analysis functionality including derivation of systematic measurement errors (e.g. readout noise), exposure time correction, residual wavelength calibration, radiometric calibration, and fine-scale pointing corrections. IRISpy's code base is publicly available through github.com and can be contributed to by anyone. In this poster we demonstrate IRISpy's functionality and the future goals of the project. We also encourage interested users to become involved in further developing IRISpy.

  7. Expert System Development Methodology (ESDM)

    NASA Technical Reports Server (NTRS)

    Sary, Charisse; Gilstrap, Lewey; Hull, Larry G.

    1990-01-01

    The Expert System Development Methodology (ESDM) provides an approach to developing expert system software. Because of the uncertainty associated with this process, an element of risk is involved. ESDM is designed to address the issue of risk and to acquire the information needed for this purpose in an evolutionary manner. ESDM presents a life cycle in which a prototype evolves through five stages of development. Each stage consists of five steps, leading to a prototype for that stage. Development may proceed to a conventional development methodology (CDM) at any time if enough has been learned about the problem to write requirements. ESDM produces requirements so that a product may be built with a CDM. ESDM is considered preliminary because it has not yet been applied to actual projects. It has been retrospectively evaluated by comparing the methods used in two ongoing expert system development projects that did not explicitly choose to use this methodology but which provided useful insights into actual expert system development practices and problems.

  8. Leveraging Modeling Approaches: Reaction Networks and Rules

    PubMed Central

    Blinov, Michael L.; Moraru, Ion I.

    2012-01-01

    We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high resolution and/or high throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatio-temporal distribution of molecules and complexes, their interactions kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks – the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks. PMID:22161349
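
    The combinatorial blow-up that motivates rule-based specification is easy to make concrete: a scaffold with n independent binding sites already has 2^n distinguishable occupancy states, whereas a rule-based description needs only n binding rules. The small sketch below makes that count explicit; it does not use any particular rule-based modeling tool.

```python
from itertools import product

def explicit_species(n_sites):
    """Enumerate every occupancy state of a scaffold with n independent sites."""
    return list(product(("free", "bound"), repeat=n_sites))

for n in (3, 6, 10):
    n_species = len(explicit_species(n))
    print(f"{n} sites: {n_species} explicit species vs {n} binding rules")
```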

  9. Leveraging modeling approaches: reaction networks and rules.

    PubMed

    Blinov, Michael L; Moraru, Ion I

    2012-01-01

    We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high-resolution and/or high-throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatiotemporal distribution of molecules and complexes, their interactions kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks - the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks.

  10. The HDF Product Designer - Interoperability in the First Mile

    NASA Astrophysics Data System (ADS)

    Lee, H.; Jelenak, A.; Habermann, T.

    2014-12-01

    Interoperable data have been a long-time goal in many scientific communities. The recent growth in analysis, visualization and mash-up applications that expect data stored in a standardized manner has brought the interoperability issue to the fore. On the other hand, producing interoperable data is often regarded as a sideline task in a typical research team for which resources are not readily available. The HDF Group is developing a software tool aimed at lessening the burden of creating data in standards-compliant, interoperable HDF5 files. The tool, named HDF Product Designer, lowers the threshold needed to design such files by providing a user interface that combines the rich HDF5 feature set with applicable metadata conventions. Users can quickly devise new HDF5 files while at the same time seamlessly incorporating the latest best practices and conventions from their community. That is what the term interoperability in the first mile means: enabling generation of interoperable data in HDF5 files from the onset of their production. The tool also incorporates collaborative features, allowing team approach in the file design, as well as easy transfer of best practices as they are being developed. The current state of the tool and the plans for future development will be presented. Constructive input from interested parties is always welcome.

  11. Combining PubMed knowledge and EHR data to develop a weighted bayesian network for pancreatic cancer prediction.

    PubMed

    Zhao, Di; Weng, Chunhua

    2011-10-01

    In this paper, we propose a novel method that combines PubMed knowledge and Electronic Health Records to develop a weighted Bayesian Network Inference (BNI) model for pancreatic cancer prediction. We selected 20 common risk factors associated with pancreatic cancer and used PubMed knowledge to weigh the risk factors. A keyword-based algorithm was developed to extract and classify PubMed abstracts into three categories that represented positive, negative, or neutral associations between each risk factor and pancreatic cancer. Then we designed a weighted BNI model by adding the normalized weights into a conventional BNI model. We used this model to extract the EHR values for patients with or without pancreatic cancer, which then enabled us to calculate the prior probabilities for the 20 risk factors in the BNI. The software iDiagnosis was designed to use this weighted BNI model for predicting pancreatic cancer. In an evaluation using a case-control dataset, the weighted BNI model significantly outperformed the conventional BNI and two other classifiers (k-Nearest Neighbor and Support Vector Machine). We conclude that the weighted BNI using PubMed knowledge and EHR data shows remarkable accuracy improvement over existing representative methods for pancreatic cancer prediction. Copyright © 2011 Elsevier Inc. All rights reserved.
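
    Schematically, the weighting step can be thought of as scaling each risk factor's contribution to a naive-Bayes-style score by its normalized literature weight. The sketch below is only that schematic: the factor names, weights, conditional probabilities and prior are invented, and a real Bayesian network would encode dependencies this toy omits.

```python
import math

# Invented literature-derived weights (normalized) and conditional probabilities
# P(factor present | cancer) and P(factor present | healthy).
FACTORS = {
    # name:                  (weight, p_given_cancer, p_given_healthy)
    "smoking":               (0.30, 0.45, 0.20),
    "chronic_pancreatitis":  (0.45, 0.15, 0.01),
    "new_onset_diabetes":    (0.25, 0.30, 0.08),
}
PRIOR_CANCER = 0.01

def weighted_log_odds(present):
    """Accumulate weighted log-likelihood ratios for the factors that are present."""
    score = math.log(PRIOR_CANCER / (1.0 - PRIOR_CANCER))
    for name, (w, p_c, p_h) in FACTORS.items():
        if present.get(name, False):
            score += w * math.log(p_c / p_h)
    return score

def probability(present):
    return 1.0 / (1.0 + math.exp(-weighted_log_odds(present)))

print(f"P(cancer) = {probability({'smoking': True, 'new_onset_diabetes': True}):.3f}")
```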

  12. Combining PubMed Knowledge and EHR Data to Develop a Weighted Bayesian Network for Pancreatic Cancer Prediction

    PubMed Central

    Zhao, Di; Weng, Chunhua

    2011-01-01

    In this paper, we propose a novel method that combines PubMed knowledge and Electronic Health Records to develop a weighted Bayesian Network Inference (BNI) model for pancreatic cancer prediction. We selected 20 common risk factors associated with pancreatic cancer and used PubMed knowledge to weigh the risk factors. A keyword-based algorithm was developed to extract and classify PubMed abstracts into three categories that represented positive, negative, or neutral associations between each risk factor and pancreatic cancer. Then we designed a weighted BNI model by adding the normalized weights into a conventional BNI model. We used this model to extract the EHR values for patients with or without pancreatic cancer, which then enabled us to calculate the prior probabilities for the 20 risk factors in the BNI. The software iDiagnosis was designed to use this weighted BNI model for predicting pancreatic cancer. In an evaluation using a case-control dataset, the weighted BNI model significantly outperformed the conventional BNI and two other classifiers (k-Nearest Neighbor and Support Vector Machine). We conclude that the weighted BNI using PubMed knowledge and EHR data shows remarkable accuracy improvement over existing representative methods for pancreatic cancer prediction. PMID:21642013

  13. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs

    PubMed Central

    Delparte, D; Gates, RD; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190
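
    Once the digital elevation model is on a regular grid, a standard 3D complexity metric is surface rugosity, the ratio of the draped (true) surface area to its planar footprint. A minimal numpy sketch on a synthetic surface follows; the cell size and the synthetic "reef" are assumptions.

```python
import numpy as np

def rugosity(dem, cell_size):
    """3D surface area of a gridded DEM divided by its planar area."""
    gy, gx = np.gradient(dem, cell_size)
    # Area of each cell on the draped surface relative to its flat footprint.
    cell_area_3d = np.sqrt(1.0 + gx**2 + gy**2) * cell_size**2
    planar_area = dem.size * cell_size**2
    return cell_area_3d.sum() / planar_area

# Synthetic 1 cm resolution DEM: bumpy "reef" versus a flat plate.
x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
reef = 0.05 * np.sin(20 * x) * np.cos(15 * y)
print(f"flat plate rugosity: {rugosity(np.zeros_like(reef), 0.01):.2f}")
print(f"synthetic reef rugosity: {rugosity(reef, 0.01):.2f}")
```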

  14. Confocal microscopy with strip mosaicing for rapid imaging over large areas of excised tissue

    NASA Astrophysics Data System (ADS)

    Abeytunge, Sanjee; Li, Yongbiao; Larson, Bjorg; Peterson, Gary; Seltzer, Emily; Toledo-Crow, Ricardo; Rajadhyaksha, Milind

    2013-06-01

    Confocal mosaicing microscopy is a developing technology platform for imaging tumor margins directly in freshly excised tissue, without the processing required for conventional pathology. Previously, mosaicing on 12 × 12 mm² of excised skin tissue from Mohs surgery and detection of basal cell carcinoma margins was demonstrated in 9 min. Last year, we reported the feasibility of a faster approach called "strip mosaicing," which was demonstrated on 10 × 10 mm² of tissue in 3 min. Here we describe further advances in instrumentation, software, and speed. A mechanism was also developed to flatten tissue in order to enable consistent and repeatable acquisition of images over large areas. We demonstrate mosaicing on 10 × 10 mm² of skin tissue with 1-μm lateral resolution in 90 s. A 2.5 × 3.5 cm² piece of breast tissue was scanned with 0.8-μm lateral resolution in 13 min. Rapid mosaicing of confocal images over large areas of fresh tissue potentially offers a means to perform pathology at the bedside. Imaging of tumor margins with strip mosaicing confocal microscopy may serve as an adjunct to conventional (frozen or fixed) pathology for guiding surgery.

  15. Cost and Time Effectiveness Analysis of a Telemedicine Service in Bangladesh.

    PubMed

    Sorwar, Golam; Rahamn, Md Mustafizur; Uddin, Ramiz; Hoque, Md Rakibul

    2016-01-01

    Telemedicine has great potential to overcome geographical barriers to providing access to equal health care services, particularly for people living in remote and rural areas in developing countries like Bangladesh. A number of telemedicine systems have been implemented in Bangladesh. However, no significant studies have been conducted to determine either their cost effectiveness or efficiency in reducing travel time required by patients. In addition, very few studies have analyzed the attitude and level of satisfaction of telemedicine service recipients in Bangladesh. The aim of this study was to analyze the cost and time effectiveness of a telemedicine service, implemented through locally developed PC based diagnostic equipment and software in Bangladesh, compared to conventional means of providing those services. The study revealed that the introduced telemedicine service reduced cost and travel time on average by 56% and 94% respectively compared to its counterpart conventional approach. The study also revealed that majority of users were highly satisfied with the newly introduced telemedicine service. Therefore, the introduced telemedicine service can be considered as a low cost and time efficient health service solution to improve health care facilities in the remote rural areas in Bangladesh.

  16. Biomolecular computers with multiple restriction enzymes

    PubMed Central

    Sakowski, Sebastian; Krasinski, Tadeusz; Waldmajer, Jacek; Sarnik, Joanna; Blasiak, Janusz; Poplawski, Tomasz

    2017-01-01

    Abstract The development of conventional, silicon-based computers has several limitations, including some related to the Heisenberg uncertainty principle and the von Neumann “bottleneck”. Biomolecular computers based on DNA and proteins are largely free of these disadvantages and, along with quantum computers, are reasonable alternatives to their conventional counterparts in some applications. The idea of a DNA computer proposed by Ehud Shapiro’s group at the Weizmann Institute of Science was developed using one restriction enzyme as hardware and DNA fragments (the transition molecules) as software and input/output signals. This computer represented a two-state, two-symbol finite automaton that was subsequently extended by using two restriction enzymes. In this paper, we propose the idea of a multistate biomolecular computer with multiple commercially available restriction enzymes as hardware. Additionally, an algorithmic method for the construction of transition molecules in the DNA computer based on the use of multiple restriction enzymes is presented. We use this method to construct multistate, biomolecular, nondeterministic finite automata with four commercially available restriction enzymes as hardware. We also describe an experimental application of this theoretical model to a biomolecular finite automaton made of four endonucleases.
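
    Abstracting away the biochemistry, the object realized in the test tube is a nondeterministic finite automaton whose transitions are encoded by transition molecules. The sketch below simulates such an NFA in software; the states, alphabet and transition table are invented for illustration and do not correspond to the enzymes or molecules used by the authors.

```python
# Toy nondeterministic finite automaton: the set-of-states simulation mirrors
# how a population of DNA molecules explores transitions in parallel.
TRANSITIONS = {            # (state, symbol) -> set of successor states
    ("S0", "a"): {"S0", "S1"},
    ("S0", "b"): {"S0"},
    ("S1", "b"): {"S2"},
}
START, ACCEPTING = "S0", {"S2"}

def accepts(word):
    current = {START}
    for symbol in word:
        current = set().union(*(TRANSITIONS.get((s, symbol), set())
                                for s in current))
        if not current:        # all molecular "computations" have died out
            return False
    return bool(current & ACCEPTING)

for w in ("ab", "aab", "ba", "abb"):
    print(f"{w!r}: {'accepted' if accepts(w) else 'rejected'}")
```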

  17. Performance evaluation of radiant cooling system integrated with air system under different operational strategies

    DOE PAGES

    Khan, Yasin; Khare, Vaibhav Rai; Mathur, Jyotirmay; ...

    2015-03-26

    The paper describes a parametric study developed to estimate the energy savings potential of a radiant cooling system installed in a commercial building in India. The study is based on numerical modeling of a radiant cooling system installed in an Information Technology (IT) office building sited in the composite climate of Hyderabad. To evaluate thermal performance and energy consumption, simulations were carried out using the ANSYS FLUENT and EnergyPlus software, respectively. The building model was calibrated using the measured data for the installed radiant system. This calibrated model was then used to simulate the energy consumption of the building using a conventional all-air system to determine the proportional energy savings. For proper handling of the latent load, a dedicated outside air system (DOAS) was used as an alternative to a Fan Coil Unit (FCU). A comparison of energy consumption showed that the radiant system was 17.5% more efficient than a conventional all-air system and that a 30% savings was achieved by using a DOAS compared with a conventional system. Computational Fluid Dynamics (CFD) simulation was performed to evaluate indoor air quality and thermal comfort. It was found that a radiant system offers more uniform temperatures, as well as a better mean air temperature range, than a conventional system. To further enhance the energy savings of the radiant system, different operational strategies were analyzed based on thermal analysis using EnergyPlus. Lastly, the energy savings achieved in this parametric run were more than 10% compared with a conventional all-air system.

  18. An industrial radiography exposure device based on measurement of transmitted gamma-ray intensity

    NASA Astrophysics Data System (ADS)

    Polee, C.; Chankow, N.; Srisatit, S.; Thong-Aram, D.

    2015-05-01

    In film radiography, underexposure and overexposure may occur, particularly when information on the specimen's material and internal voids is lacking. This paper describes a method and a device for determining exposure in industrial gamma-ray radiography based on quick measurement of the transmitted gamma-ray intensity with a small detector. Application software was developed for an Android mobile phone to remotely control the device and to display counting data via Bluetooth communication. Prior to film exposure, the device is placed behind the specimen to measure the transmitted intensity, which is inversely proportional to the required exposure. Unlike the conventional exposure-curve approach, correction factors for source decay, source-to-film distance, specimen thickness, and kind of material are not needed. The developed technique and device make the radiographic process more economical, convenient, and reliable.
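
    The central relation stated above is that the required exposure is inversely proportional to the transmitted intensity measured behind the specimen. The following minimal sketch illustrates that scaling only; the calibration constant and count rate are hypothetical values, not parameters of the described device.

        # Hedged sketch: exposure scales inversely with transmitted gamma-ray intensity
        # (counts per second behind the specimen). k_cal is a hypothetical calibration
        # constant fixed once for a given film, source and target film density.
        def exposure_time_s(transmitted_cps, k_cal=5.0e4):
            if transmitted_cps <= 0:
                raise ValueError("transmitted intensity must be positive")
            return k_cal / transmitted_cps   # denser/thicker specimen -> fewer counts -> longer exposure

        print(exposure_time_s(250.0))  # e.g. 200 s for a hypothetical 250 cps reading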

  19. Development of a time-variable nuclear pulser for half life measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahn, Guilherme S.; Domienikan, Claudio; Carvalhaes, Roberto P. M.

    2013-05-06

    In this work, a time-variable pulser system with an exponentially decaying pulse frequency is presented, developed using the low-cost, open-source Arduino microcontroller platform. In this system, the microcontroller produces a TTL signal at the selected rate, and a pulse shaper board adjusts it so that it can be fed into an amplifier as a conventional pulser signal; both the decay constant and the initial pulse rate can be set using user-friendly control software, and the pulse amplitude can be adjusted with a potentiometer on the pulse shaper board. The pulser was tested using several combinations of initial pulse rate and decay constant, and the results show that the system is stable and reliable, and is suitable for use in half-life measurements.
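
    The defining behaviour of the pulser is a pulse frequency that decays exponentially from an initial rate, mimicking a decaying source. The sketch below reproduces that timing law in Python as a rough stand-in for the firmware's timing loop; the initial rate and decay constant are arbitrary illustrative values, not settings from the actual Arduino code.

        import math

        # Hedged sketch of an exponentially decaying pulse rate r(t) = r0 * exp(-lambda * t).
        def pulse_times(r0_hz, decay_per_s, duration_s):
            t, times = 0.0, []
            while t < duration_s:
                rate = r0_hz * math.exp(-decay_per_s * t)
                t += 1.0 / rate            # wait one (instantaneous) pulse period
                times.append(t)
            return times

        ts = pulse_times(r0_hz=1000.0, decay_per_s=0.01, duration_s=10.0)
        print(len(ts), "pulses in 10 s")   # somewhat fewer than 10000, because the rate decays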

  20. Note: Tesla based pulse generator for electrical breakdown study of liquid dielectrics

    NASA Astrophysics Data System (ADS)

    Veda Prakash, G.; Kumar, R.; Patel, J.; Saurabh, K.; Shyam, A.

    2013-12-01

    In the process of studying charge holding capability and breakdown delay time in liquids under nanosecond (ns) time scales, a Tesla-based pulse generator has been developed. The pulse generator is a combination of a Tesla transformer, a pulse forming line, a fast closing switch, and a test chamber. The use of a Tesla transformer instead of conventional Marx generators makes the pulse generator very compact and cost effective, and it requires less maintenance. The system has been designed and developed to deliver a maximum output voltage of 300 kV with a rise time of the order of tens of nanoseconds. The paper deals with the system design parameters, the breakdown test procedure, and various experimental results. To validate the pulse generator performance, experimental results have been compared with PSPICE simulations and are in good agreement with them.

  1. GEMPAK5. Part 2: GEMPLT programmer's guide, version 5.0

    NASA Technical Reports Server (NTRS)

    Desjardins, Mary L.; Brill, Keith F.; Schotz, Steven S.

    1991-01-01

    GEMPAK is a general meteorological software package used to analyze and display conventional meteorological data as well as satellite derived parameters. The GEMPAK Programmer's Guide describes the subroutines which can be used in the GEMPAK graphics and transformation subsystem, GEMPLT.

  2. Computer software configuration description, 241-AY and 241-AZ tank farm MICON automation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkelman, W.D.

    This document describes the configuration process, the choices and conventions used during the configuration activities, and the issues involved in making changes to the configuration. It includes the master listings of the Tag definitions, which should be revised to authorize any changes. Revision 2 incorporates minor changes to ensure the documented setpoints accurately reflect the limits (including an exhaust stack flow of 800 scfm) established in OSD-T-151-00019. The MICON DCS software controls and monitors the instrumentation and equipment associated with plant systems and processes.

  3. pcircle - A Suite of Scalable Parallel File System Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WANG, FEIYI

    2015-10-01

    Most file system software is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on ubiquitous MPI in a cluster computing environment and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
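
    As a rough single-node illustration of the parallel checksumming idea (pcircle itself distributes the work across a cluster with MPI and work-stealing), the sketch below splits a file into fixed-size chunks and hashes them in a process pool. The chunk size and the final combination step are arbitrary choices, not pcircle's actual algorithm.

        import hashlib, os
        from multiprocessing import Pool

        CHUNK = 4 * 1024 * 1024  # 4 MiB chunks (arbitrary choice)

        def _hash_chunk(args):
            path, offset = args
            with open(path, "rb") as f:
                f.seek(offset)
                return hashlib.sha1(f.read(CHUNK)).hexdigest()

        def parallel_checksum(path, workers=4):
            """Hash fixed-size chunks in a process pool, then hash the ordered chunk digests."""
            offsets = [(path, off) for off in range(0, os.path.getsize(path), CHUNK)]
            with Pool(workers) as pool:
                digests = pool.map(_hash_chunk, offsets)   # map preserves chunk order
            return hashlib.sha1("".join(digests).encode()).hexdigest()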

  4. Contribution of 3D inversion of Electrical Resistivity Tomography data applied to volcanic structures

    NASA Astrophysics Data System (ADS)

    Portal, Angélie; Fargier, Yannick; Lénat, Jean-François; Labazuy, Philippe

    2016-04-01

    The electrical resistivity tomography (ERT) method, initially developed for environmental and engineering exploration, is now commonly used to image geological structures. Such structures can present complex characteristics that conventional 2D inversion processes cannot fully capture. Here we present a new 3D inversion algorithm named EResI, first developed for levee investigation and now applied to the study of a complex lava dome (the Puy de Dôme volcano, France). The EResI algorithm is based on a conventional regularized Gauss-Newton inversion scheme and a 3D unstructured discretization of the model (a double-grid method based on tetrahedra). This discretization makes it possible to model the topography of the investigated structure accurately (without a mesh deformation procedure) and also allows precise positioning of the electrodes. Moreover, we demonstrate that a fully 3D unstructured discretization limits the number of inversion cells and is better matched to the resolution capacity of the tomography than a structured discretization. This study shows that 3D inversion with an unstructured parametrization has several advantages over classical 2D inversions. First, a 2D inversion produces artefacts due to 3D effects (3D topography, 3D internal resistivity). Second, the ability to align electrodes along a single axis in the field (as required for 2D surveys) depends on field constraints such as topography; the 2D assumption built into 2.5D inversion software prevents it from modelling electrodes located off that axis, leading to artefacts in the inversion result. Finally, the mesh deformation techniques used to model topography in 2D software with structured discretization (Res2dinv) are not applicable to strong topography (>60%) and introduce small computational errors. A wide geophysical survey was carried out on the Puy de Dôme volcano, yielding 12 ERT profiles with approximately 800 electrodes. We processed the data in two stages, independently inverting each profile in 2D (RES2DINV software) and the complete data set in 3D (EResI). Comparing the 3D inversion results with those obtained through a conventional 2D inversion process showed that EResI accurately accounts for the actual electrode positions and reduces off-line artefacts in the inversion models caused by positioning errors outside the profile axis. The comparison also highlighted the benefit of integrating several ERT lines to compute 3D models of complex volcanic structures. Finally, the resulting 3D model allows a better interpretation of the Puy de Dôme volcano.
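
    EResI is described above as a conventional regularized Gauss-Newton scheme; the sketch below spells out one such update step for a generic forward model. The smoothing operator W and damping weight lam are generic textbook choices, not EResI's actual parametrization or mesh handling.

        import numpy as np

        def gauss_newton_step(m, d_obs, forward, jacobian, W, lam):
            """One regularized Gauss-Newton update for a generic resistivity-style inversion.

            m        : current model parameters (n,)
            d_obs    : observed data (k,)
            forward  : callable m -> predicted data (k,)
            jacobian : callable m -> sensitivity matrix J (k, n)
            W        : roughness/smoothing operator (p, n)
            lam      : regularization weight (generic choice)
            """
            J = jacobian(m)
            r = d_obs - forward(m)                 # data residual
            A = J.T @ J + lam * (W.T @ W)          # regularized normal matrix
            b = J.T @ r - lam * (W.T @ W) @ m      # gradient of the penalized misfit
            return m + np.linalg.solve(A, b)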

  5. Sustainable and non-conventional monitoring systems to mitigate natural hazards in low income economies: the 4onse project approach.

    NASA Astrophysics Data System (ADS)

    Cannata, Massimiliano; Ratnayake, Rangajeewa; Antonovic, Milan; Strigaro, Daniele

    2017-04-01

    Environmental monitoring systems in low-income countries are often in decline, outdated, or missing, with the consequence that the information vital for coping with and mitigating natural hazards is scarcely available or accessible. Non-conventional monitoring systems based on open technologies may offer a viable way to create low-cost, sustainable monitoring systems that can be fully developed, deployed, and maintained locally, without lock-in dependencies on copyrights or patents or high replacement costs. The 4onse research project, funded under the Research for Development program of the Swiss National Science Foundation and the Swiss Office for Development and Cooperation, proposes a complete monitoring system that integrates Free & Open Source Software, Open Hardware, Open Data, and Open Standards. After its engineering phase, it will be tested in the Deduru Oya catchment (Sri Lanka) to evaluate the system and to develop a water management information system for optimizing the regulation of artificial basin levels and mitigating flash floods. One objective is to better understand, scientifically, the strengths, critical issues, and applicability of such systems in terms of data quality, system durability, management costs, performance, and sustainability. Results, challenges, and experiences from the first six months of the project will be presented, with particular focus on synergy building and on advances in the data collection and dissemination system.

  6. Space Station Software Issues

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor); Beskenis, S. (Editor)

    1985-01-01

    Issues in the development of software for the Space Station are discussed. Software acquisition and management, software development environment, standards, information system support for software developers, and a future software advisory board are addressed.

  7. Follicular fluid dehydroepiandrosterone sulfate is a credible marker of oocyte maturity and pregnancy outcome in conventional in vitro fertilization cycles.

    PubMed

    Chimote, Natachandra M; Nath, Nirmalendu M; Chimote, Nishad N; Chimote, Bindu N

    2015-01-01

    To investigate whether the level of dehydroepiandrosterone sulfate (DHEA-s) in follicular fluid (FF) influences the competence of oocytes to fertilize, develop to the blastocyst stage, and produce a viable pregnancy in conventional in vitro fertilization (IVF) cycles. Prospective study of age-matched, non-polycystic ovary syndrome (non-PCOS) women undergoing an antagonist stimulation protocol involving conventional insemination and day 5 blastocyst transfer. FF levels of DHEA-s and E2 were measured by a radio-immuno-assay method using diagnostic kits. Fertilization rate, embryo development to the blastocyst stage, and live birth rate were the main outcome measures. Cycles were divided into pregnant/nonpregnant groups and also into low/medium/high FF DHEA-s groups. Statistical analysis was done with GraphPad Prism V software. FF DHEA-s levels were significantly higher in the pregnant (n = 111) than in the nonpregnant (n = 381) group (1599 ± 77.45 vs. 1372 ± 40.47 ng/ml; P = 0.01). The high (n = 134) FF DHEA-s group had a significantly higher percentage of metaphase II (MII) oocytes (91.5 vs. 85.54 vs. 79.44%, P < 0.0001), fertilization rate (78.86 vs. 74.16 vs. 71.26%, P < 0.0001), cleavage rate (83.67 vs. 69.1 vs. 66.17%, P = 0.0002), blastocyst formation rate (37.15 vs. 33.01 vs. 26.95%, P < 0.0001), and live birth rate (29.85 vs. 22.22 vs. 14.78%, P = 0.017) compared to the medium (n = 243) and low (n = 115) FF DHEA-s groups, respectively, despite comparable numbers of oocytes retrieved and blastocysts transferred. FF DHEA-s levels correlated significantly with the attainment of MII oocytes (Pearson r = 0.41) and fertilization rates (Pearson r = 0.35). FF DHEA-s level influences the oocyte maturation process and is predictive of fertilization, embryo development to the blastocyst stage, and live birth rates in non-PCOS women undergoing conventional IVF cycles.

  8. A Quantitative Study of Global Software Development Teams, Requirements, and Software Projects

    ERIC Educational Resources Information Center

    Parker, Linda L.

    2016-01-01

    The study explored the relationship between global software development teams, effective software requirements, and stakeholders' perception of successful software development projects within the field of information technology management. It examined the critical relationship between Global Software Development (GSD) teams creating effective…

  9. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  10. Can light-field photography ease focusing on the scalp and oral cavity?

    PubMed

    Taheri, Arash; Feldman, Steven R

    2013-08-01

    Capturing a well-focused image with an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. With information about the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or on different parts of the oral cavity. The final pictures were compared with pictures taken with conventional compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to select the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information over the full depth of field compared with conventional cameras; however, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Using COMSOL Multiphysics Software to Model Anisotropic Dielectric and Metamaterial Effects in Folded-Waveguide Traveling-Wave Tube Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Starinshak, David P.; Smith, Nathan D.; Wilson, Jeffrey D.

    2008-01-01

    The electromagnetic effects of conventional dielectrics, anisotropic dielectrics, and metamaterials were modeled in a terahertz-frequency folded-waveguide slow-wave circuit. Results of attempts to utilize these materials to increase efficiency are presented.

  12. GEMPAK5. Part 1: GEMPAK5 programmer's guide, version 5.0

    NASA Technical Reports Server (NTRS)

    Desjardins, Mary L.; Brill, Keith F.; Schotz, Steven S.

    1991-01-01

    GEMPAK is a general meteorological software package used to analyze and display conventional meteorological data as well as satellite derived parameters. The Programmer's Guide describes the subroutines which can be used to build new GEMPAK programs. Part 1 contains GEMPAK subroutines.

  13. Digital Textbooks. Research Brief

    ERIC Educational Resources Information Center

    Johnston, Howard

    2011-01-01

    Despite their growing popularity, digital alternatives to conventional textbooks are stirring up controversy. With the introduction of tablet computers, and the growing trend toward "cloud computing" and "open source" software, the trend is accelerating because costs are coming down and free or inexpensive materials are becoming more available.…

  14. C style guide

    NASA Technical Reports Server (NTRS)

    Doland, Jerry; Valett, Jon

    1994-01-01

    This document discusses recommended practices and style for programmers using the C language in the Flight Dynamics Division environment. The guidelines are based on generally recommended software engineering techniques, industry resources, and local convention. The Guide offers preferred solutions to common C programming issues and illustrates them through examples of C code.

  15. OPTiM: Optical projection tomography integrated microscope using open-source hardware and software

    PubMed Central

    Andrews, Natalie; Davis, Samuel; Bugeon, Laurence; Dallman, Margaret D.; McGinty, James

    2017-01-01

    We describe the implementation of an OPT plate to perform optical projection tomography (OPT) on a commercial wide-field inverted microscope, using our open-source hardware and software. The OPT plate includes a tilt adjustment for alignment and a stepper motor for sample rotation as required by standard projection tomography. Depending on magnification requirements, three methods of performing OPT are detailed using this adaptor plate: a conventional direct OPT method requiring only the addition of a limiting aperture behind the objective lens; an external optical-relay method allowing conventional OPT to be performed at magnifications >4x; a remote focal scanning and region-of-interest method for improved spatial resolution OPT (up to ~1.6 μm). All three methods use the microscope’s existing incoherent light source (i.e. arc-lamp) and all of its inherent functionality is maintained for day-to-day use. OPT acquisitions are performed on in vivo zebrafish embryos to demonstrate the implementations’ viability. PMID:28700724

  16. Realtime Multichannel System for Beat to Beat QT Interval Variability

    NASA Technical Reports Server (NTRS)

    Starc, Vito; Schlegel, Todd T.

    2006-01-01

    The measurement of beat-to-beat QT interval variability (QTV) shows clinical promise for identifying several types of cardiac pathology. However, until now, there has been no device capable of displaying, in real time on a beat-to-beat basis, changes in QTV in all 12 conventional leads in a continuously monitored patient. While several software programs have been designed to analyze QTV, heretofore, such programs have all involved only a few channels (at most) and/or have required laborious user interaction or offline calculations and postprocessing, limiting their clinical utility. This paper describes a PC-based ECG software program that, in real time, acquires, analyzes and displays QTV and also PQ interval variability (PQV) in each of the eight independent channels that constitute the 12-lead conventional ECG. The system also processes certain related signals that are derived from singular value decomposition and that help to reduce the overall effects of noise on the real-time QTV and PQV results.
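
    The quantity tracked by the program, beat-to-beat QT interval variability, can be illustrated with a short offline calculation. The sketch below maintains a simple running sample variance of QT intervals over a sliding window of beats; it is not the real-time, SVD-assisted algorithm of the described software, and the interval values are hypothetical.

        from collections import deque

        class QTVTracker:
            """Hedged sketch: running beat-to-beat QT variability over the last N beats."""
            def __init__(self, window_beats=60):
                self.qt_ms = deque(maxlen=window_beats)

            def add_beat(self, qt_interval_ms):
                self.qt_ms.append(qt_interval_ms)

            def variability(self):
                n = len(self.qt_ms)
                if n < 2:
                    return None
                mean = sum(self.qt_ms) / n
                return sum((x - mean) ** 2 for x in self.qt_ms) / (n - 1)   # sample variance (ms^2)

        tracker = QTVTracker()
        for qt in (402, 398, 405, 410, 399):   # hypothetical QT intervals in ms
            tracker.add_beat(qt)
        print(tracker.variability())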

  17. Automated Reuse of Scientific Subroutine Libraries through Deductive Synthesis

    NASA Technical Reports Server (NTRS)

    Lowry, Michael R.; Pressburger, Thomas; VanBaalen, Jeffrey; Roach, Steven

    1997-01-01

    Systematic software construction offers the potential of elevating software engineering from an art form to an engineering discipline. The desired result is more predictable software development leading to better quality and more maintainable software. However, the overhead costs associated with the formalisms, mathematics, and methods of systematic software construction have largely precluded their adoption in real-world software development. In fact, many mainstream software development organizations, such as Microsoft, still maintain a predominantly oral culture for software development projects, which is far removed from a formalism-based culture for software development. An exception is the limited domain of safety-critical software, where the high assurance inherent in systematic software construction justifies the additional cost. We believe that systematic software construction will only be adopted by mainstream software development organizations when the overhead costs have been greatly reduced. Two approaches to cost mitigation are reuse (amortizing costs over many applications) and automation. For the last four years, NASA Ames has funded the Amphion project, whose objective is to automate software reuse through techniques from systematic software construction. In particular, deductive program synthesis (i.e., program extraction from proofs) is used to derive a composition of software components (e.g., subroutines) that correctly implements a specification. The construction of reuse libraries of software components is the standard software engineering solution for improving software development productivity and quality.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nawrocki, G.J.; Seaver, C.L.; Kowalkowski, J.B.

    As controls needs at the Advanced Photon Source matured from an installation phase to an operational phase, the need to monitor the existing conventional facilities control system from the EPICS-based accelerator control system was recognized. The existing conventional facilities control network is based on a proprietary system from Johnson Controls called Metasys. Initially, read-only monitoring of the Metasys parameters will be provided; however, future expansion to full control remains possible. This paper describes a method of using commercially available hardware and existing EPICS software as a bridge between the Metasys and EPICS control systems.

  19. Artefacts found in computed radiography.

    PubMed

    Cesar, L J; Schueler, B A; Zink, F E; Daly, T R; Taubel, J P; Jorgenson, L L

    2001-02-01

    Artefacts on radiographic images are distracting and may compromise accurate diagnosis. Although most artefacts that occur in conventional radiography have become familiar, computed radiography (CR) systems produce artefacts that differ from those found in conventional radiography. We have encountered a variety of artefacts in CR images produced by four different models of plate reader. These artefacts have been identified and traced to the imaging plate, the plate reader, the image processing software, the laser printer, or operator error. Understanding the potential sources of CR artefacts will aid in identifying and resolving problems quickly and help prevent future occurrences.

  20. Predicting Software Suitability Using a Bayesian Belief Network

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.

    2005-01-01

    The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
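
    As a toy illustration of the modeling approach (not the authors' actual network structure or probabilities), the sketch below encodes a single cause-effect link, from development-team skill to product suitability, as a conditional probability table and queries it in both the causal and the diagnostic direction.

        # Hedged sketch of one cause-effect link in a Bayesian belief network.
        # The structure and numbers are invented for illustration only.
        p_skill = {"high": 0.4, "medium": 0.4, "low": 0.2}            # prior over team skill
        p_suitable_given_skill = {"high": 0.85, "medium": 0.6, "low": 0.3}

        # Marginalize: P(suitable) = sum_s P(suitable | skill=s) * P(skill=s)
        p_suitable = sum(p_suitable_given_skill[s] * p_skill[s] for s in p_skill)
        print(f"P(product suitable) = {p_suitable:.2f}")              # 0.64 with these numbers

        # Diagnostic query by Bayes' rule: P(skill=high | suitable)
        p_high_given_suitable = p_suitable_given_skill["high"] * p_skill["high"] / p_suitable
        print(f"P(skill=high | suitable) = {p_high_given_suitable:.2f}")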

  1. The Effects of Development Team Skill on Software Product Quality

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.

    2006-01-01

    This paper provides an analysis of the effect of the skill and experience of the software development team on the quality of the final software product. A method for the assessment of software development team skill and experience is proposed, derived from a workforce management tool currently in use by the National Aeronautics and Space Administration. Using data from 26 small-scale software development projects, the team skill measures are correlated with 5 software product quality metrics from the ISO/IEC 9126 Software Engineering Product Quality standard. In the analysis of the results, development team skill is found to be a significant factor in the adequacy of the design and implementation. In addition, the results imply that inexperienced software developers are tasked with responsibilities ill-suited to their skill level, and thus have a significant adverse effect on the quality of the software product. Keywords: software quality, development skill, software metrics

  2. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique based on the second harmonic signal is demonstrated in this paper to improve the performance of a quartz enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. its signal to noise ratio (SNR), detection limit, and long-term stability. A constant current corresponding to the gas absorption line, combined with a sinusoidal signal at frequency f/2, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift due to ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor was chosen as the target gas to evaluate the system's performance compared to the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibration-based system is 12.87 times higher than that of the conventional WMS system. The new system achieved a better linear response (R² = 0.9995) in the concentration range from 300 to 2000 ppmv, and achieved a minimum detection limit (MDL) of 630 ppbv.
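
    To make the role of the second harmonic concrete, the sketch below digitally extracts the 2f component of a detected signal by lock-in style demodulation. The modulation frequency, sampling rate, and waveform are invented placeholders, and the actual calibration logic that re-centres the laser on the absorption line is not reproduced.

        import numpy as np

        fs, f_mod = 50_000.0, 100.0                  # sample rate and modulation frequency (illustrative)
        t = np.arange(0, 1.0, 1.0 / fs)

        # Synthetic detector signal with components at f and 2f plus noise (placeholder only)
        sig = 0.8 * np.sin(2 * np.pi * f_mod * t) + 0.05 * np.cos(2 * np.pi * 2 * f_mod * t)
        sig += 0.01 * np.random.randn(t.size)

        # Lock-in style demodulation at 2f: multiply by quadrature references and low-pass (mean)
        i2f = 2.0 * np.mean(sig * np.cos(2 * np.pi * 2 * f_mod * t))
        q2f = 2.0 * np.mean(sig * np.sin(2 * np.pi * 2 * f_mod * t))
        amp_2f = np.hypot(i2f, q2f)
        print(f"recovered 2f amplitude ~= {amp_2f:.3f}")   # close to the injected 0.05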

  3. Modulated and continuous-wave operations of low-power thulium (Tm:YAP) laser in tissue welding

    NASA Astrophysics Data System (ADS)

    Bilici, Temel; Tabakoğlu, Haşim Özgür; Topaloğlu, Nermin; Kalaycıoğlu, Hamit; Kurt, Adnan; Sennaroglu, Alphan; Gülsoy, Murat

    2010-05-01

    Our aim is to explore the welding capabilities of a thulium (Tm:YAP) laser in modulated and continuous-wave (CW) modes of operation. The Tm:YAP laser system developed for this study includes a Tm:YAP laser resonator, a diode laser driver, a water chiller, a modulation controller unit, and acquisition/control software. Full-thickness incisions on Wistar rat skin were welded with the Tm:YAP laser system at 100 mW for 5 s in both modulated and CW modes of operation (34.66 W/cm2). The skin samples were examined during a 21-day healing period by histology and tensile tests. The results were compared with samples closed by the conventional suture technique. For the laser groups, immediate closure of the surface layers of the incisions was observed. Full closure was observed for both modulated and CW modes of operation at day 4. The tensile forces for both modulated and CW modes of operation were found to be significantly higher than those obtained with the conventional suture technique. The 1980-nm Tm:YAP laser system operating in both modulated and CW modes maximizes the therapeutic effect while minimizing undesired side effects of laser tissue welding. Hence, it is a potentially important alternative to the conventional suturing technique.

  4. Computer-Aided Software Engineering - An approach to real-time software development

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.; Turkovich, John J.

    1989-01-01

    A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.

  5. Flight Software Math Library

    NASA Technical Reports Server (NTRS)

    McComas, David

    2013-01-01

    The flight software (FSW) math library is a collection of reusable math components that provides typical math utilities required by spacecraft flight software. These utilities are intended to increase flight software quality, reusability, and maintainability by providing a set of consistent, well-documented, and tested math utilities. This library depends only on ANSI C, so it is easily ported. Prior to this library, each mission typically created its own math utilities using ideas and code from previous missions. Part of the reason for this is that math libraries can be written with different strategies in areas like error handling, parameter ordering, naming conventions, etc. Changing the utilities for each mission introduces risks and costs. The obvious risks and costs are that the utilities must be coded and revalidated. The hidden risks and costs arise in miscommunication between engineers, since these utilities must be understood by both the flight software engineers and other subsystem engineers (primarily guidance, navigation, and control). The FSW math library is part of a larger goal to produce a library of reusable Guidance, Navigation and Control (GN&C) FSW components. A GN&C FSW library cannot be created unless a standardized math basis is created first. This library solves the standardization problem by defining a common feature set and establishing policies for the library's design. This allows the library to be maintained with the same strategy used in its initial development, which supports a library of reusable GN&C FSW components. The FSW math library is written for an embedded software environment in C. This places restrictions on the language features that can be used by the library. Another advantage of the FSW math library is that it can be used in the FSW as well as in other environments, such as the GN&C analysts' simulators. This helps communication between the teams because they can use the same utilities with the same feature set and syntax.

  6. The "neuro-mapping locator" software. A real-time intraoperative objective paraesthesia mapping tool to evaluate paraesthesia coverage of the painful zone in patients undergoing spinal cord stimulation lead implantation.

    PubMed

    Guetarni, F; Rigoard, P

    2015-03-01

    Conventional spinal cord stimulation (SCS) generates paraesthesia, as the efficacy of this technique is based on the relationship between the paraesthesia provided by SCS over the painful zone and an analgesic effect on the stimulated zone. Although this basic postulate rests on clinical evidence, this relationship has never been formally demonstrated by scientific studies. There is a need for objective evaluation tools ("transducers") to transpose electrical signals into clinical effects and to guide therapeutic choices. We have developed software at Poitiers University Hospital that allows real-time objective mapping, on a touch screen interface, of the paraesthesia generated by SCS lead placement and programming during the implantation procedure itself. The purpose of this article is to describe this intraoperative mapping software in terms of its concept and technical aspects. The Neuro-Mapping Locator (NML) software enables patients with failed back surgery syndrome who are candidates for SCS lead implantation to actively participate in the implantation procedure. Real-time geographical localization of the paraesthesia generated by a percutaneous or multicolumn surgical SCS lead implanted under awake anaesthesia allows intraoperative lead programming, and possibly lead positioning, to be modified with the patient's cooperation. Software updates should enable us to refine objectives related to the use of this tool and minimize observational biases. The ultimate goals of the NML software should not be limited to optimizing the implantation of one specific device in a patient, but should also allow various stimulation strategies to be compared instantaneously, by characterizing new technical parameters such as "coverage efficacy" and "device specificity" for selected subgroups of patients. Another longer-term objective would be to organize these predictive factors into computer science ontologies, which could constitute robust and helpful data for device selection and programming of tomorrow's neurostimulators. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  7. OnlineTED.com--a novel web-based audience response system for higher education. A pilot study to evaluate user acceptance.

    PubMed

    Kühbeck, Felizian; Engelhardt, Stefan; Sarikas, Antonio

    2014-01-01

    Audience response (AR) systems are increasingly used in undergraduate medical education. However, the high costs and complexity of conventional AR systems often limit their use. Here we present a novel AR system that is platform independent and does not require hardware clickers or additional software to be installed. "OnlineTED" was developed at Technische Universität München (TUM) based on Hypertext Preprocessor (PHP) with a My Structured Query Language (MySQL) database on the server side and Javascript as the client-side programming language. "OnlineTED" enables lecturers to create and manage question sets online and start polls in class via a web browser. Students can participate in the polls with any internet-enabled device (smartphones, tablet PCs or laptops). A paper-based survey was conducted with undergraduate medical students and lecturers at TUM to compare "OnlineTED" with conventional AR systems using clickers. "OnlineTED" received above-average evaluation results from both students and lecturers at TUM and was seen as on par with or superior to conventional AR systems. The survey results indicated that up to 80% of students at TUM own an internet-enabled device (smartphone or tablet PC) for participation in web-based AR technologies. "OnlineTED" is a novel web-based and platform-independent AR system for higher education that was well received by students and lecturers. As a non-commercial alternative to conventional AR systems, it may foster interactive teaching in undergraduate education, in particular with large audiences.

  8. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and addresses internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.

  9. The Elements of an Effective Software Development Plan - Software Development Process Guidebook

    DTIC Science & Technology

    2011-11-11

    standards and practices required for all XMPL software development. This SDP implements the <corporate> Standard Software Process (SSP). as tailored...Developing and integrating reusable software products • Approach to managing COTS/Reuse software implementation • COTS/Reuse software selection...final selection and submit to change board for approval MAINTENANCE Monitor current products for obsolescence or end of support Track new

  10. Modeling in conventional and supra electroporation for model cell with organelles

    NASA Astrophysics Data System (ADS)

    Sulaeman, Muhammad Yangki; Widita, Rena

    2015-09-01

    Electroporation is the formation of pores in the cell membrane due to an external electric field applied to the cell. There are two types of electroporation: conventional and supra-electroporation. The purposes of creating pores in the cell by conventional electroporation are to increase the effectiveness of chemotherapy (electrochemotherapy) and to kill cancer tissue by irreversible electroporation. Supra-electroporation can induce electroporation in the organelles inside the cell, so it can kill the cell through the apoptosis mechanism. The electroporation phenomenon was modeled on a model cell using COMSOL Multiphysics 4.3b software, with applied external electric fields of 1.1 kV/cm for conventional electroporation and 60 kV/cm for supra-electroporation, to find the difference in transmembrane voltage and pore density between the two cases. It can be concluded from the results that there is a large difference in transmembrane voltage and pore density between conventional and supra-electroporation in the model cell.
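
    For orientation, the induced transmembrane voltage that such finite-element models compute can be estimated analytically for a spherical cell with the steady-state Schwan relation, ΔV ≈ 1.5·E·r·cos θ. The sketch below evaluates it at the pole for the two field strengths quoted above, using a hypothetical 10 μm cell radius; note that nanosecond (supra-electroporation) pulses are generally too short for the plasma membrane to charge to this steady-state value, which is why they can act on intracellular organelles.

        import math

        def schwan_tmv(E_field_v_per_m, radius_m, theta_rad=0.0):
            """Steady-state induced transmembrane voltage for a spherical cell (Schwan relation)."""
            return 1.5 * E_field_v_per_m * radius_m * math.cos(theta_rad)

        r = 10e-6  # hypothetical 10 micrometre cell radius
        for label, e_kv_per_cm in (("conventional", 1.1), ("supra", 60.0)):
            E = e_kv_per_cm * 1e3 / 1e-2          # kV/cm -> V/m
            print(label, f"{schwan_tmv(E, r):.2f} V at the pole")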

  11. [Experimental research on the effect of nanophase ceramics on osteoblasts functions].

    PubMed

    Wen, Bo; Chen, Zhiqing; Jiang, Yinshan; Yang, Zhengwen; Xu, Yongzhong

    2005-06-01

    In order to study the cytocompatibility of nanophase hydroxyapatite ceramic in vitro, we prepared hydroxyapatite by wet chemistry techniques. The grain size of the hydroxyapatite of interest to the present study was determined by scanning electron microscopy and atomic force microscopy with image analysis software. A primary culture of osteoblasts from rat calvaria was established. Protein content, synthesis of alkaline phosphatase, and deposition of calcium-containing mineral by osteoblasts cultured on nanophase hydroxyapatite ceramics and on conventional hydroxyapatite ceramics for 7, 14, 21, and 28 days were examined. The results showed that the average surface grain sizes of the nanophase and conventional HA compact formulations were 55 nm and 780 nm, respectively. More importantly, the synthesis of alkaline phosphatase and the deposition of calcium-containing mineral by osteoblasts cultured on the nanophase ceramic were significantly greater than those on the conventional ceramic after 21 and 28 days. The cytocompatibility of nanophase HA was significantly greater than that of conventional formulations of the same ceramic.

  12. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGA. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration, yielding greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as those on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, along with two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, has been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
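
    The classification step being accelerated reduces, per pixel, to evaluating an SVM decision function. The sketch below contrasts the linear and polynomial kernels mentioned above in generic numpy code; the weights, support vectors, and kernel parameters are placeholders, not values from the EO-1 classifier.

        import numpy as np

        def linear_decision(x, w, b):
            """Linear-kernel SVM: sign of w.x + b."""
            return np.sign(w @ x + b)

        def poly_decision(x, support_vecs, alphas, labels, b, degree=3, coef0=1.0):
            """Polynomial-kernel SVM: sign of sum_i alpha_i y_i (x_i.x + coef0)^d + b."""
            k = (support_vecs @ x + coef0) ** degree
            return np.sign(np.sum(alphas * labels * k) + b)

        # Placeholder 4-band pixel and model parameters (illustrative only)
        x = np.array([0.2, 0.5, 0.1, 0.7])
        print(linear_decision(x, w=np.array([1.0, -0.5, 0.3, 0.2]), b=-0.1))
        sv = np.array([[0.1, 0.4, 0.2, 0.6], [0.3, 0.2, 0.5, 0.1]])
        print(poly_decision(x, sv, alphas=np.array([0.7, 0.4]), labels=np.array([1, -1]), b=0.05))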

  13. Artificial intelligence approaches to software engineering

    NASA Technical Reports Server (NTRS)

    Johannes, James D.; Macdonald, James R.

    1988-01-01

    Artificial intelligence approaches to software engineering are examined. The software development life cycle is a sequence of not-so-well-defined phases. Improved techniques for developing systems have been formulated over the past 15 years, but the pressure to reduce costs continues, and software development technology seems to be standing still. The primary objective of the knowledge-based approach to software development presented in this paper is to avoid problem areas that lead to schedule slippages, cost overruns, or software products that fall short of their desired goals. Identifying and resolving software problems early, often in the phase in which they first occur, has been shown to contribute significantly to reducing risks in software development. Software development is not a mechanical process but a basic human activity. It requires clear thinking, work, and rework to be successful. The artificial intelligence approaches to software engineering presented here support the software development life cycle by changing current practices and methods: these should be replaced by better techniques that improve the process of software development and the quality of the resulting products. The software development process can be structured into well-defined steps whose interfaces are standardized, supported, and checked by automated procedures that provide error detection, produce the documentation, and ultimately support the actual design of complex programs.

  14. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 1: Concepts and activity descriptions

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into development of advanced technologies for Computer Aided Software Engineering (CASE).

  15. Gamma ray shielding and structural properties of PbO-P2O5-Na2WO4 glass system

    NASA Astrophysics Data System (ADS)

    Dogra, Mridula; Singh, K. J.; Kaur, Kulwinder; Anand, Vikas; Kaur, Parminder

    2017-05-01

    The present work has been undertaken to study the gamma-ray shielding properties of the PbO-P2O5-Na2WO4 glass system. The values of the mass attenuation coefficient and the half value layer parameter at photon energies of 511, 662, and 1173 keV have been determined using the XCOM computer software developed by the National Institute of Standards and Technology. Density, molar volume, XRD, UV-VIS, and Raman studies have been performed to characterize the structural properties of the prepared glass system and to assess the possibility of using the prepared samples as an alternative to conventional concrete for gamma-ray shielding applications.
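
    The two reported shielding parameters are linked by a standard relation: the linear attenuation coefficient is the mass attenuation coefficient times the density, and the half value layer is ln 2 divided by that. The sketch below applies this relation with illustrative input values, not the paper's measured data.

        import math

        def half_value_layer_cm(mass_atten_cm2_per_g, density_g_per_cm3):
            """HVL = ln(2) / mu, with mu = (mu/rho) * rho."""
            mu = mass_atten_cm2_per_g * density_g_per_cm3   # linear attenuation coefficient (1/cm)
            return math.log(2) / mu

        # Illustrative values only (not the measured PbO-P2O5-Na2WO4 data)
        print(f"HVL ~ {half_value_layer_cm(0.08, 5.5):.2f} cm")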

  16. Computational fluid dynamics (CFD) study on the fetal aortic coarctation

    NASA Astrophysics Data System (ADS)

    Zhou, Yue; Zhang, Yutao; Wang, Jingying

    2018-03-01

    Blood flows in normal and coarctate fetal aortas are simulated with the CFD technique using T-rex grids. The three-dimensional (3-D) digital model of the fetal aorta is reconstructed with computer-aided design (CAD) software based on two-dimensional (2-D) ultrasound tomographic images. The simulation results display the development and enhancement of the secondary flow structure in the coarctate fetal aorta. As the diameter narrowing ratio rises above 45%, the pressure and wall shear stress (WSS) in the aortic arch increase exponentially, which is consistent with the conventional clinical picture. The present study also demonstrates that CFD is a very promising assistive technique for investigating human cardiovascular diseases.

  17. Stereoscopic image production: live, CGI, and integration

    NASA Astrophysics Data System (ADS)

    Criado, Enrique

    2006-02-01

    This paper briefly describes part of the experience gathered in more than 10 years of stereoscopic movie production, some of the most common problems encountered, and the solutions, with varying degrees of success, that we applied to solve those problems. Our work is mainly focused on the entertainment market: theme parks, museums, and other culture-related locations and events. For our movies, we have had to develop our own devices to permit correct stereo shooting (stereoscopic rigs) or real-time stereo monitoring, and to solve problems found with conventional film editing, compositing, and postproduction software. Here, we discuss stereo lighting, monitoring, special effects, image integration (using dummies and more), stereo-camera parameters, and other general 3-D movie production aspects.

  18. High fold computer disk storage DATABASE for fast extended analysis of γ-rays events

    NASA Astrophysics Data System (ADS)

    Stézowski, O.; Finck, Ch.; Prévost, D.

    1999-03-01

    Recently, spectacular technical developments have been achieved to increase the resolving power of large γ-ray spectrometers. With these new eyes, physicists are able to study the intricate nature of atomic nuclei. Concurrently, more and more complex multidimensional analyses are needed to investigate very weak phenomena. In this article, we first present a software package (DATABASE) allowing high-fold coincidence γ-ray events to be stored on hard disk. Then, a non-conventional method of analysis, the anti-gating procedure, is described. Two physical examples are given to explain how it can be used, and Monte Carlo simulations have been performed to test the validity of this method.

  19. High-stability Shuttle pointing system

    NASA Technical Reports Server (NTRS)

    Van Riper, R.

    1981-01-01

    It was recognized that precision pointing provided by the Orbiter's attitude control system would not be good enough for Shuttle payload scientific experiments or certain Defense department payloads. The Annular Suspension Pointing System (ASPS) is being developed to satisfy these more exacting pointing requirements. The ASPS is a modular pointing system which consists of two principal parts, including an ASPS Gimbal System (AGS) which provides three conventional ball-bearing gimbals and an ASPS Vernier System (AVS) which magnetically isolates the payload. AGS performance requirements are discussed and an AGS system description is given. The overall AGS system consists of the mechanical hardware, sensors, electronics, and software. Attention is also given to system simulation and performance prediction, and support facilities.

  20. Applying Standard Independent Verification and Validation (IVV) Techniques Within an Agile Framework: Is There a Compatibility Issue?

    NASA Technical Reports Server (NTRS)

    Dabney, James B.; Arthur, James Douglas

    2017-01-01

    Agile methods have gained wide acceptance over the past several years, to the point that they are now a standard management and execution approach for small-scale software development projects. While conventional Agile methods are not generally applicable to large multi-year and mission-critical systems, Agile hybrids are now being developed (such as SAFe) to exploit the productivity improvements of Agile while retaining the process rigor and coordination that these projects need. From the perspective of Independent Verification and Validation (IVV), however, the adoption of these hybrid Agile frameworks is becoming somewhat problematic. Hence, we find it prudent to question the compatibility of conventional IVV techniques with (hybrid) Agile practices. This paper documents our investigation of (a) the relevant literature, (b) the modification and adoption of Agile frameworks to accommodate the development of large-scale, mission-critical systems, and (c) the compatibility of standard IVV techniques within hybrid Agile development frameworks. Specific to the latter, we found that the IVV methods employed within a hybrid Agile process can be divided into three groups: (1) early-lifecycle IVV techniques that are fully compatible with the hybrid lifecycles; (2) IVV techniques that focus on tracing requirements, test objectives, etc., which are somewhat incompatible but can be tailored with modest effort; and (3) IVV techniques involving an assessment that requires artifact completeness, which are simply not compatible with hybrid Agile processes, e.g., those that assume complete requirement specification early in the development lifecycle.

  1. Automatic on-line detection system design research on internal defects of metal materials based on optical fiber F-P sensing technology

    NASA Astrophysics Data System (ADS)

    Xia, Liu; Shan, Ning; Chao, Ban; Caoshan, Wang

    2016-10-01

    Metal materials are used widely in aerospace and other industrial fields because of their excellent characteristics, so detection of their internal defects is very important. Ultrasound is widely used for nondestructive testing because of its excellent characteristics, but conventional ultrasound detection instruments have shortcomings such as a low level of intelligence and long development cycles, which limit their development. In this paper, the theory of ultrasound detection is analyzed, and a computational method for determining the position of defects is given. A non-contact optical fiber F-P interference cavity structure is designed, and the initial cavity length is given. A real-time on-line ultrasound detection experimental device for internal defects of metal materials is established based on the optical fiber F-P sensing system. A virtual instrument for automated ultrasound detection of internal defects is developed based on LabVIEW software, and an experimental study is carried out. The results show that this system can be used effectively for real-time on-line location of internal defects in engineering structures. The system has high measurement precision, with a relative error of 6.7%, and can meet the requirements of engineering practice. The system is characterized by simple operation and easy realization, and the software has a friendly interface, good expansibility, and a high level of intelligence.

  2. EOS MLS Level 2 Data Processing Software Version 3

    NASA Technical Reports Server (NTRS)

    Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; wang, Shuhui; Manney, Gloria L.; hide

    2011-01-01

    This software accepts the calibrated EOS MLS microwave radiance products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that governs all aspects of its operation: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading the appropriate a priori, spectroscopic, and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results to appropriate files. In production mode, the software operates in parallel, with one instance of the program acting as a master coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks, and it also dramatically reduces the complexity of the code maintenance effort.
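
    The production-mode arrangement described above, one master coordinating slave instances that each process a chunk of data, is a standard master/worker pattern. The sketch below mimics it on a single machine with a process pool; the chunk count and the per-chunk function are placeholders for the real Level 2 retrieval.

        from multiprocessing import Pool

        def process_chunk(chunk_id):
            """Placeholder for retrieving one chunk's temperature/composition profiles."""
            return chunk_id, {"status": "ok"}       # real code would return geophysical products

        def run_master(num_chunks=240, workers=8):
            """Master: hand chunks to worker processes and collect their results."""
            with Pool(workers) as pool:
                results = dict(pool.map(process_chunk, range(num_chunks)))
            return results

        if __name__ == "__main__":
            out = run_master()
            print(len(out), "chunks processed")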

  3. Agile Software Development

    ERIC Educational Resources Information Center

    Biju, Soly Mathew

    2008-01-01

    Many software development firms are now adopting the agile software development method. This method involves the customer at every level of software development, thus reducing the impact of change in the requirement at a later stage. In this article, the principles of the agile method for software development are explored and there is a focus on…

  4. Development of a comprehensive software engineering environment

    NASA Technical Reports Server (NTRS)

    Hartrum, Thomas C.; Lamont, Gary B.

    1987-01-01

    The generation of a set of tools for the software lifecycle is a recurring theme in the software engineering literature. The development of such tools and their integration into a software development environment is a difficult task because of the magnitude (number of variables) and the complexity (combinatorics) of the software lifecycle process. An initial development of a global approach was initiated in 1982 as the Software Development Workbench (SDW). Continuing efforts focus on tool development, tool integration, human interfacing, data dictionaries, and testing algorithms. Current efforts are emphasizing natural language interfaces, expert system software development associates and distributed environments with Ada as the target language. The current implementation of the SDW is on a VAX-11/780. Other software development tools are being networked through engineering workstations.

  5. Reducing Risk in DoD Software-Intensive Systems Development

    DTIC Science & Technology

    2016-03-01

    intensive systems development risk. This research addresses the use of the Technical Readiness Assessment (TRA) using the nine-level software Technology...The software TRLs are ineffective in reducing technical risk for the software component development. • Without the software TRLs, there is no...effective method to perform software TRA or reduce the technical development risk. The software component will behave as a new, untried technology in nearly

  6. [Cytocompatibility of nanophase hydroxyapatite ceramics].

    PubMed

    Wen, Bo; Chen, Zhi-qing; Jiang, Yin-shan; Yang, Zheng-wen; Xu, Yong-zhong

    2004-12-01

    To evaluate the cytocompatibility of nanophase hydroxyapatite ceramics in vitro. Hydroxyapatite (HA) was prepared via a wet method. The grain size of the hydroxyapatite in the study was determined by scanning electron microscopy and atomic force microscopy with image analysis software. A primary osteoblast culture was established from rat calvaria. Cell adherence and proliferation on nanophase hydroxyapatite ceramics and conventional hydroxyapatite ceramics were examined at 1, 3, 5, and 7 days. The morphology of the cells was observed by microscopy. The average grain size of the nanophase and conventional HA was 55 nm and 780 nm, respectively. Throughout the 7-day period, osteoblast proliferation on the HA was similar to that on tissue culture borosilicate glass controls; osteoblasts could attach, spread and proliferate on HA. However, compared to conventional ceramics, osteoblast proliferation on nanophase HA was significantly better after 1, 3, 5 and 7 days. The cytocompatibility of nanophase HA was significantly better than that of conventional ceramics.

  7. Population pharmacokinetics of busulfan in pediatric and young adult patients undergoing hematopoietic cell transplant: a model-based dosing algorithm for personalized therapy and implementation into routine clinical use.

    PubMed

    Long-Boyle, Janel R; Savic, Rada; Yan, Shirley; Bartelink, Imke; Musick, Lisa; French, Deborah; Law, Jason; Horn, Biljana; Cowan, Morton J; Dvorak, Christopher C

    2015-04-01

    Population pharmacokinetic (PK) studies of busulfan in children have shown that individualized model-based algorithms provide improved targeted busulfan therapy when compared with conventional dose guidelines. The adoption of population PK models into routine clinical practice has been hampered by the tendency of pharmacologists to develop complex models too impractical for clinicians to use. The authors aimed to develop a population PK model for busulfan in children that can reliably achieve therapeutic exposure (concentration at steady state) and implement a simple model-based tool for the initial dosing of busulfan in children undergoing hematopoietic cell transplantation. Model development was conducted using retrospective data available in 90 pediatric and young adult patients who had undergone hematopoietic cell transplantation with busulfan conditioning. Busulfan drug levels and potential covariates influencing drug exposure were analyzed using the nonlinear mixed effects modeling software, NONMEM. The final population PK model was implemented into a clinician-friendly Microsoft Excel-based tool and used to recommend initial doses of busulfan in a group of 21 pediatric patients prospectively dosed based on the population PK model. Modeling of busulfan time-concentration data indicates that busulfan clearance displays nonlinearity in children, decreasing up to approximately 20% between the concentrations of 250-2000 ng/mL. Important patient-specific covariates found to significantly impact busulfan clearance were actual body weight and age. The percentage of individuals achieving a therapeutic concentration at steady state was significantly higher in subjects receiving initial doses based on the population PK model (81%) than in historical controls dosed on conventional guidelines (52%) (P = 0.02). When compared with the conventional dosing guidelines, the model-based algorithm demonstrates significant improvement for providing targeted busulfan therapy in children and young adults.
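
    As a sketch of what a simple model-based initial-dose calculation looks like, the following Python fragment computes a dose per interval from a target steady-state concentration and a body-weight-scaled clearance. All parameter values are hypothetical placeholders, not the published busulfan model; clinical dosing must use the validated model and tool described above.

      def initial_dose_mg(css_target_ng_ml: float, weight_kg: float,
                          interval_h: float = 6.0,
                          cl_per_kg_l_h: float = 0.2) -> float:
          """Dose (mg) per dosing interval so that the average Css ~= target."""
          cl_l_h = cl_per_kg_l_h * weight_kg                           # hypothetical clearance model
          auc_target_mg_h_l = css_target_ng_ml * interval_h / 1000.0   # ng/mL -> mg/L, over one interval
          return auc_target_mg_h_l * cl_l_h                            # Dose = AUC_target * CL

      print(f"{initial_dose_mg(css_target_ng_ml=900, weight_kg=20):.1f} mg")  # 21.6 mg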

  8. Lessons in modern digital field geology: Open source software, 3D techniques, and the new world of digital mapping

    NASA Astrophysics Data System (ADS)

    Pavlis, Terry; Hurtado, Jose; Langford, Richard; Serpa, Laura

    2014-05-01

    Although many geologists refuse to admit it, it is time to put paper-based geologic mapping into the historical archives and move to the full potential of digital mapping techniques. For our group, flat map digital geologic mapping is now a routine operation in both research and instruction. Several software options are available, and basic proficiency with the software can be learned in a few hours of instruction and practice. The first practical field GIS software, ArcPad, remains a viable, stable option on Windows-based systems. However, the vendor seems to be moving away from ArcPad in favor of mobile software solutions that are difficult to implement without GIS specialists. Thus, we have pursued a second software option based on the open source program QGIS. Our QGIS system uses the same shapefile-centric data structure as our ArcPad system, including similar pop-up data entry forms and generic graphics for easy data management in the field. The advantage of QGIS is that the same software runs on virtually all common platforms except iOS, although the Android version remains unstable as of this writing. A third software option we are experimenting with for flat map-based field work is Fieldmove, a derivative of the 3D-capable program Move developed by Midland Valley. Our initial experiments with Fieldmove are positive, particularly with the new, inexpensive (<300 Euros) Windows tablets. However, the lack of flexibility in data structure makes for cumbersome workflows when trying to interface our existing shapefile-centric data structures to Move. Nonetheless, in spring 2014 we will experiment with full-3D immersion in the field using the full Move software package in combination with ground-based LiDAR and photogrammetry. One new workflow suggested by our initial experiments is that field geologists should consider using photogrammetry software to capture 3D visualizations of key outcrops. This process is now straightforward in several software packages, and it affords previously unheard-of potential for communicating the complexity of key exposures. For example, in studies of metamorphic structures we often search for days to find "Rosetta Stone" outcrops that display key geometric relationships. While conventional photographs rarely can capture the essence of the field exposure, capturing a true 3D representation of the exposure with multiple photos from many orientations can solve this communication problem. As spatial databases evolve these 3D models should be readily importable into the database.

  9. Improving the strength of additively manufactured objects via modified interior structure

    NASA Astrophysics Data System (ADS)

    Al, Can Mert; Yaman, Ulas

    2017-10-01

    Additive manufacturing (AM), in other words 3D printing, is becoming more common because of crucial advantages over traditional manufacturing methods, such as geometric complexity and functional interior structures. Fused Filament Fabrication (FFF) 3D printing technology in particular is frequently used because desktop variants of these printers are highly appropriate for different fields and are improving rapidly. Although AM offers significant advantages, the strength of the parts fabricated with AM is still a major problem, especially when plastic materials such as Acrylonitrile butadiene styrene (ABS), Polylactic acid (PLA), Nylon, etc., are utilized. In this study, an alternative method is proposed in which the strength of AM-fabricated parts is improved by employing a direct slicing approach. Traditional Computer Aided Manufacturing (CAM) software for 3D printers takes as input only the geometry, in triangular mesh form (a stereolithography, or STL, file) generated by Computer Aided Design software. This file format includes data only about the outer boundaries of the geometry. The interior of the artifacts is manufactured with homogeneous infill patterns, such as diagonal, honeycomb, linear, etc., according to the paths generated in the CAM software. The method developed in this study provides a way to fabricate parts with heterogeneous infill patterns by utilizing the stress field data obtained from a Finite Element Analysis software, such as ABAQUS. According to the performed tensile tests, the strength of the test specimen is improved by about 45% compared to the conventional way of 3D printing.
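
    The core idea, mapping a per-region stress value from the FEA result to a local infill density, can be sketched in a few lines of Python; the linear mapping and the density bounds below are illustrative assumptions, not the paper's slicing pipeline.

      import numpy as np

      def infill_density(von_mises, d_min=0.15, d_max=0.80):
          """Linearly map normalized von Mises stress to an infill density fraction."""
          s = np.asarray(von_mises, dtype=float)
          s_norm = (s - s.min()) / (s.max() - s.min() + 1e-12)
          return d_min + s_norm * (d_max - d_min)

      stresses = np.array([12.0, 35.0, 80.0, 150.0])    # MPa, one value per slicing region
      print(infill_density(stresses).round(2))          # denser infill where stress is higher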

  10. Perceptions of clinical utility of an Augmented Reality musical software among health care professionals.

    PubMed

    Corrêa, Ana Grasielle Dionísio; de Assis, Gilda Aparecida; do Nascimento, Marilena; de Deus Lopes, Roseli

    2017-04-01

    Augmented Reality musical software (GenVirtual) is a technology which primarily allows users to develop music activities for rehabilitation. This study aimed to analyse the perceptions of health care professionals regarding the clinical utility of GenVirtual. A second objective was to identify improvements to GenVirtual software and similar technologies. Music therapists, occupational therapists, physiotherapists and a speech and language therapist who assist people with physical and cognitive disabilities were enrolled in three focus groups. The quantitative and qualitative data were analysed through inductive thematic analysis. Three main themes were identified: the use of GenVirtual in health care areas; opportunities for realistic application of GenVirtual; and limitations in the use of GenVirtual. The registration units identified were: motor stimulation, cognitive stimulation, verbal learning, recreation activity, musicality, accessibility, motivation, sonic accuracy, interference of lighting, poor sound, children and adults. This research suggested that GenVirtual is a complementary tool to conventional clinical practice and has great potential for the motor and cognitive rehabilitation of children and adults. Implications for Rehabilitation: Gaining health professionals' perceptions of the Augmented Reality musical game (GenVirtual) gives valuable information as to the clinical utility of the software. GenVirtual was perceived as a tool that could enhance the motor and cognitive rehabilitation process. GenVirtual was viewed as a tool that could enhance clinical practice and communication among various agencies, but it was suggested that it should be used with caution to avoid confusion and replacement of important services.

  11. Parallel design patterns for a low-power, software-defined compressed video encoder

    NASA Astrophysics Data System (ADS)

    Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar

    2011-06-01

    Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications, such as 10-bit sample depth or 4:2:2 chroma format, often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real-time clocks, GPS data, mission/ESD/user data or software-defined radio in a low-power, field-upgradable implementation. Low-power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data-parallel and task-parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.
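
    The dataflow pattern of tasks communicating by messages can be illustrated with a minimal Python sketch; the two stages and their placeholder computations are purely hypothetical stand-ins for encoder tasks such as transform and entropy coding, not the encoder described above.

      # Each pipeline stage reads items from an input queue, processes them, and
      # writes results to an output queue; a None "poison pill" shuts a stage down.
      from queue import Queue
      from threading import Thread

      def stage(fn, q_in, q_out):
          while True:
              item = q_in.get()
              if item is None:
                  q_out.put(None)
                  break
              q_out.put(fn(item))

      q1, q2, q3 = Queue(), Queue(), Queue()
      Thread(target=stage, args=(lambda b: b * 2, q1, q2)).start()   # stand-in "transform" task
      Thread(target=stage, args=(lambda b: b + 1, q2, q3)).start()   # stand-in "entropy coding" task

      for block in range(5):   # blocks of data flow through the task graph
          q1.put(block)
      q1.put(None)

      while (out := q3.get()) is not None:
          print(out)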

  12. Evolution of Secondary Software Businesses: Understanding Industry Dynamics

    NASA Astrophysics Data System (ADS)

    Tyrväinen, Pasi; Warsta, Juhani; Seppänen, Veikko

    The primary software industry originated from IBM's decision to unbundle software-related computer system development activities to external partners. Outsourcing from an enterprise's internal software development activity is a common means of starting a new software business serving a vertical software market. It combines knowledge of the vertical market process with competence in software development. In this research, we present and analyze the key figures of the Finnish secondary software industry, in order to quantify its interaction with the primary software industry during the period of 2000-2003. On the basis of the empirical data, we present a model for the evolution of a secondary software business, which makes the industry dynamics explicit. It represents the shift from internal software developed for competitive advantage to development of products supporting standard business processes on top of standardized technologies. We also discuss the implications for software business strategies in each phase.

  13. Perk Station – Percutaneous Surgery Training and Performance Measurement Platform

    PubMed Central

    Vikal, Siddharth; U-Thainual, Paweena; Carrino, John A.; Iordachita, Iulian; Fischer, Gregory S.; Fichtinger, Gabor

    2009-01-01

    Motivation: Image-guided percutaneous (through the skin) needle-based surgery has become part of routine clinical practice in performing procedures such as biopsies, injections and therapeutic implants. A novice physician typically performs needle interventions under the supervision of a senior physician; a slow and inherently subjective training process that lacks objective, quantitative assessment of surgical skill and performance. Shortening the learning curve and increasing procedural consistency are important factors in assuring high-quality medical care. Methods: This paper describes a laboratory validation system, called Perk Station, for standardized training and performance measurement under different assistance techniques for needle-based surgical guidance systems. The initial goal of the Perk Station is to assess and compare different techniques: 2D image overlay, biplane laser guide, laser protractor and conventional freehand. The main focus of this manuscript is the planning and guidance software system developed on the 3D Slicer platform, a free, open source software package designed for visualization and analysis of medical image data. Results: The prototype Perk Station has been successfully developed, the associated needle insertion phantoms were built, and the graphical user interface was fully implemented. The system was inaugurated in undergraduate teaching and a wide array of outreach activities. Initial results, experiences, ongoing activities and future plans are reported. PMID:19539446

  14. MEDIC: medical embedded device for individualized care.

    PubMed

    Wu, Winston H; Bui, Alex A T; Batalin, Maxim A; Au, Lawrence K; Binney, Jonathan D; Kaiser, William J

    2008-02-01

    The presented work highlights the development and initial validation of a medical embedded device for individualized care (MEDIC), which is based on a novel software architecture, enabling sensor management and disease prediction capabilities, and commercially available microelectronic components, sensors and a conventional personal digital assistant (PDA) (or a cell phone). In this paper, we present a general architecture for a wearable sensor system that can be customized to an individual patient's needs. This architecture is based on embedded artificial intelligence that permits autonomous operation, sensor management and inference, and may be applied to general-purpose wearable medical diagnostics. A prototype of the system has been developed based on a standard PDA and wireless sensor nodes equipped with commercially available Bluetooth radio components, permitting real-time streaming of high-bandwidth data from various physiological and contextual sensors. We also present the results of abnormal gait diagnosis using the complete system from our evaluation, and illustrate how the wearable system and its operation can be remotely configured and managed by either enterprise systems or medical personnel at centralized locations. By using the commercially available hardware components and software architecture presented in this paper, the MEDIC system can be rapidly configured, providing medical researchers with broadband sensor data from remote patients and platform access to best adapt operation to diagnostic objectives.

  15. Agreement between Computerized and Human Assessment of Performance on the Ruff Figural Fluency Test

    PubMed Central

    Elderson, Martin F.; Pham, Sander; van Eersel, Marlise E. A.; Wolffenbuttel, Bruce H. R.; Kok, Johan; Gansevoort, Ron T.; Tucha, Oliver; van der Klauw, Melanie M.; Slaets, Joris P. J.

    2016-01-01

    The Ruff Figural Fluency Test (RFFT) is a sensitive test for nonverbal fluency suitable for all age groups. However, assessment of performance on the RFFT is time-consuming and may be affected by interrater differences. Therefore, we developed computer software specifically designed to analyze performance on the RFFT by automated pattern recognition. The aim of this study was to compare assessment by the new software with conventional assessment by human raters. The software was developed using data from the Lifelines Cohort Study and validated in an independent cohort of the Prevention of Renal and Vascular End Stage Disease (PREVEND) study. The total study population included 1,761 persons: 54% men; mean age (SD), 58 (10) years. All RFFT protocols were assessed by the new software and two independent human raters (criterion standard). The mean number of unique designs (SD) was 81 (29) and the median number of perseverative errors (interquartile range) was 9 (4 to 16). The intraclass correlation coefficient (ICC) between the computerized and human assessment was 0.994 (95%CI, 0.988 to 0.996; p<0.001) and 0.991 (95%CI, 0.990 to 0.991; p<0.001) for the number of unique designs and perseverative errors, respectively. The mean difference (SD) between the computerized and human assessment was -1.42 (2.78) and +0.02 (1.94) points for the number of unique designs and perseverative errors, respectively. This was comparable to the agreement between two independent human assessments: ICC, 0.995 (0.994 to 0.995; p<0.001) and 0.985 (0.982 to 0.988; p<0.001), and mean difference (SD), -0.44 (2.98) and +0.56 (2.36) points for the number of unique designs and perseverative errors, respectively. We conclude that the agreement between the computerized and human assessment was very high and comparable to the agreement between two independent human assessments. Therefore, the software is an accurate tool for the assessment of performance on the RFFT. PMID:27661083

  16. GENERAL PURPOSE ADA PACKAGES

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    Ten families of subprograms are bundled together for the General-Purpose Ada Packages. The families bring to Ada many features from HAL/S, PL/I, FORTRAN, and other languages. These families are: string subprograms (INDEX, TRIM, LOAD, etc.); scalar subprograms (MAX, MIN, REM, etc.); array subprograms (MAX, MIN, PROD, SUM, GET, and PUT); numerical subprograms (EXP, CUBIC, etc.); service subprograms (DATE_TIME function, etc.); Linear Algebra II; Runge-Kutta integrators; and three text I/O families of packages. In two cases, a family consists of a single non-generic package. In all other cases, a family comprises a generic package and its instances for a selected group of scalar types. All generic packages are designed to be easily instantiated for the types declared in the user facility. The linear algebra package is LINRAG2. This package includes subprograms supplementing those in NPO-17985, An Ada Linear Algebra Package Modeled After HAL/S (LINRAG). Please note that LINRAG2 cannot be compiled without LINRAG. Most packages have widespread applicability, although some are oriented for avionics applications. All are designed to facilitate writing new software in Ada. Several of the packages use conventions introduced by other programming languages. A package of string subprograms is based on HAL/S (a language designed for the avionics software in the Space Shuttle) and PL/I. Packages of scalar and array subprograms are taken from HAL/S or generalized current Ada subprograms. A package of Runge-Kutta integrators is patterned after a built-in MAC (MIT Algebraic Compiler) integrator. Those packages modeled after HAL/S make it easy to translate existing HAL/S software to Ada. The General-Purpose Ada Packages program source code is available on two 360K 5.25" MS-DOS format diskettes. The software was developed using VAX Ada v1.5 under DEC VMS v4.5. It should be portable to any validated Ada compiler and it should execute either interactively or in batch. The largest package requires 205K of main memory on a DEC VAX running VMS. The software was developed in 1989, and is a copyrighted work with all copyright vested in NASA.

  17. A practical implementation for a data dictionary in an environment of diverse data sets

    USGS Publications Warehouse

    Sprenger, Karla K.; Larsen, Dana M.

    1993-01-01

    The need for a data dictionary database at the U.S. Geological Survey's EROS Data Center (EDC) was reinforced with the Earth Observing System Data and Information System (EOSDIS) requirement for consistent field definitions of data sets residing at more than one archive center. The EDC requirement addresses the existence of multiple data sets with identical field definitions using various naming conventions. The EDC is developing a data dictionary database to accomplish the following goals: to standardize field names for ease in software development; to facilitate querying and updating of the data; and to generate ad hoc reports. The structure of the EDC electronic data dictionary database supports different metadata systems as well as many different data sets. A series of reports is used to keep consistency among data sets and various metadata systems.

  18. Guidelines for preparing software user documentation

    NASA Technical Reports Server (NTRS)

    Miller, Diane F.

    1987-01-01

    Clear, easy-to-use software user's manuals make strong demands on special technical communication techniques. Principles and guidelines are given for analyzing the audience and dealing with wide-ranging backgrounds of potential users. Types of information to be included in a complete manual are suggested, with a technique for creating a user-oriented rather than process-oriented organization. Accuracy verification is emphasized. Simple tips are given for formatting for quick comprehension and reference, for deciding on packaging, for creating helpful illustrations and examples, and for setting up clear and consistent conventions. Simple guidelines are offered for writing clearly and concisely and for editing.

  19. CONTIN XPCS: Software for Inverse Transform Analysis of X-Ray Photon Correlation Spectroscopy Dynamics

    PubMed Central

    Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-01-01

    X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) both reveal dynamics using coherent scattering, but X-rays permit investigation of dynamics in a much more diverse array of materials. Heterogeneous dynamics occur in many such materials, and we showed how classic tools employed in the analysis of heterogeneous DLS dynamics extend to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. This work presents the software implementation of inverse transform analysis of XPCS data called CONTIN XPCS, an extension of traditional CONTIN that accommodates dynamics encountered in equilibrium XPCS measurements. PMID:29875507

  20. CONTIN XPCS: Software for Inverse Transform Analysis of X-Ray Photon Correlation Spectroscopy Dynamics.

    PubMed

    Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) both reveal dynamics using coherent scattering, but X-rays permit investigation of dynamics in a much more diverse array of materials. Heterogeneous dynamics occur in many such materials, and we showed how classic tools employed in the analysis of heterogeneous DLS dynamics extend to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. This work presents the software implementation of inverse transform analysis of XPCS data called CONTIN XPCS, an extension of traditional CONTIN that accommodates dynamics encountered in equilibrium XPCS measurements.
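
    For context, the "conventional Kohlrausch exponential fitting" contrasted above refers to fitting a stretched exponential to the measured correlation function. A minimal Python sketch on synthetic data (the parameter names and values are illustrative, not part of CONTIN XPCS) is:

      import numpy as np
      from scipy.optimize import curve_fit

      def g2_kww(t, contrast, tau, beta):
          """Intensity correlation minus baseline: contrast * exp(-2 (t/tau)^beta)."""
          return contrast * np.exp(-2.0 * (t / tau) ** beta)

      t = np.logspace(-3, 2, 60)   # delay times, s
      data = g2_kww(t, 0.25, 1.5, 0.8) + 0.005 * np.random.default_rng(3).standard_normal(60)

      (contrast, tau, beta), _ = curve_fit(g2_kww, t, data, p0=[0.2, 1.0, 1.0])
      print(f"contrast={contrast:.3f}, tau={tau:.3f} s, beta={beta:.3f}")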

  1. The development and technology transfer of software engineering technology at NASA. Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Pitman, C. L.; Erb, D. M.; Izygon, M. E.; Fridge, E. M., III; Roush, G. B.; Braley, D. M.; Savely, R. T.

    1992-01-01

    The United States' big space projects of the next decades, such as Space Station and the Human Exploration Initiative, will need the development of many millions of lines of mission-critical software. NASA-Johnson (JSC) is identifying and developing some of the Computer Aided Software Engineering (CASE) technology that NASA will need to build these future software systems. The goal is to improve the quality and the productivity of large software development projects. New trends are outlined in CASE technology, and how the Software Technology Branch (STB) at JSC is endeavoring to provide some of these CASE solutions for NASA is described. Key software technology components include knowledge-based systems, software reusability, user interface technology, reengineering environments, management systems for the software development process, software cost models, repository technology, and open, integrated CASE environment frameworks. The paper presents the status and long-term expectations for CASE products. The STB's Reengineering Application Project (REAP), Advanced Software Development Workstation (ASDW) project, and software development cost model (COSTMODL) project are then discussed. Some of the general difficulties of technology transfer are introduced, and a process developed by STB for CASE technology insertion is described.

  2. Reciprocal Cost Allocation and Decision Making for Universities.

    ERIC Educational Resources Information Center

    Metzger, Lawrence M.

    1994-01-01

    Examines the use of the reciprocal method as an alternative to more conventional methods of university service department cost allocation. This method can be used with software that is readily available and with already known data. Reciprocal cost allocation will provide appropriate allocation values for financial reporting and data for university…
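
    Although the abstract is truncated, the reciprocal method itself is a standard technique: total service-department costs are found by solving a small system of simultaneous equations that accounts for the services the departments provide to one another. A minimal Python sketch with hypothetical numbers:

      import numpy as np

      direct = np.array([100_000.0, 60_000.0])   # direct costs of service departments S1, S2
      # fraction of each department's output consumed by the other department:
      use = np.array([[0.0, 0.2],                 # S1 receives 20% of S2's services
                      [0.1, 0.0]])                # S2 receives 10% of S1's services

      # Total (reciprocal) cost T satisfies T = direct + use @ T, i.e. (I - use) T = direct
      total = np.linalg.solve(np.eye(2) - use, direct)
      print(total.round(2))   # totals to allocate onward to operating departments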

  3. Utility of the summation chromatographic peak integration function to avoid manual reintegrations in the analysis of targeted analytes

    USDA-ARS?s Scientific Manuscript database

    As sample preparation and analytical techniques have improved, data handling has become the main limitation in automated high-throughput analysis of targeted chemicals in many applications. Conventional chromatographic peak integration functions rely on complex software and settings, but untrustwor...

  4. Using Visualization and Computation in the Analysis of Separation Processes

    ERIC Educational Resources Information Center

    Joo, Yong Lak; Choudhary, Devashish

    2006-01-01

    For decades, every chemical engineer has been asked to have a background in separations. The required separations course can, however, be uninspiring and superficial because understanding many separation processes involves conventional graphical methods and commercial process simulators. We utilize simple, user-­friendly mathematical software,…

  5. Software Development as Music Education Research

    ERIC Educational Resources Information Center

    Brown, Andrew R.

    2007-01-01

    This paper discusses how software development can be used as a method for music education research. It explains how software development can externalize ideas, stimulate action and reflection, and provide evidence to support the educative value of new software-based experiences. Parallels between the interactive software development process and…

  6. 77 FR 50724 - Developing Software Life Cycle Processes for Digital Computer Software Used in Safety Systems of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Developing Software Life Cycle Processes for Digital... Software Life Cycle Processes for Digital Computer Software used in Safety Systems of Nuclear Power Plants... clarifications, the enhanced consensus practices for developing software life-cycle processes for digital...

  7. Generic domain models in software engineering

    NASA Technical Reports Server (NTRS)

    Maiden, Neil

    1992-01-01

    This paper outlines three research directions related to domain-specific software development: (1) reuse of generic models for domain-specific software development; (2) empirical evidence to determine these generic models, namely elicitation of mental knowledge schema possessed by expert software developers; and (3) exploitation of generic domain models to assist modelling of specific applications. It focuses on knowledge acquisition for domain-specific software development, with emphasis on tool support for the most important phases of software development.

  8. Photometric Modeling of Simulated Surface-Resolved Bennu Images

    NASA Astrophysics Data System (ADS)

    Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.

    2017-12-01

    The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map which predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratio maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best-fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the completeness of the data set for evaluating the phase and disk functions of the surface. Application of this software to simulated mission data has revealed limitations in the initial mission design, which has fed back into the planning process. The entire photometric pipeline further serves as an exercise of planned activities for proximity operations.
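
    As an illustration of fitting one conventional empirical photometric model to surface-resolved reflectance samples, the following Python sketch fits a Minnaert model to synthetic data; the choice of model, the parameter values, and the data are assumptions for illustration and not the mission pipeline itself.

      import numpy as np
      from scipy.optimize import curve_fit

      def minnaert(angles, albedo, k):
          mu0, mu = angles                      # cosines of incidence and emission angles
          return albedo * mu0**k * mu**(k - 1)

      rng = np.random.default_rng(0)
      mu0 = rng.uniform(0.3, 1.0, 200)
      mu = rng.uniform(0.3, 1.0, 200)
      refl = minnaert((mu0, mu), 0.045, 0.7) * (1 + 0.02 * rng.standard_normal(200))

      (albedo_fit, k_fit), _ = curve_fit(minnaert, (mu0, mu), refl, p0=[0.05, 0.8])
      print(f"albedo={albedo_fit:.4f}, k={k_fit:.3f}")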

  9. Cartilage quantification using contrast-enhanced MRI in the wrist of rheumatoid arthritis: cartilage loss is associated with bone marrow edema.

    PubMed

    Fujimori, Motoshi; Nakamura, Satoko; Hasegawa, Kiminori; Ikeno, Kunihiro; Ichikawa, Shota; Sutherland, Kenneth; Kamishima, Tamotsu

    2017-08-01

    To quantify wrist cartilage using contrast MRI and compare it with the extent of adjacent synovitis and bone marrow edema (BME) in patients with rheumatoid arthritis (RA). 18 patients with RA underwent post-contrast fat-suppressed T1-weighted coronal imaging. The cartilage area at the centre of the scaphoid-capitate (SC) and radius-scaphoid (RS) joints was measured by in-house developed software. We defined cartilage as the pixels with signal intensity between two thresholds (lower: 0.4, 0.5 and 0.6 times the muscle signal; upper: 0.9, 1.0, 1.1, 1.2 and 1.3 times the muscle signal). We investigated the association of cartilage loss with synovitis and BME scores derived from the RA MRI scoring system. Cartilage area was correlated with BME score when the thresholds were adequately set, with the lower threshold at 0.6 times the muscle signal and the upper threshold at 1.2 times the muscle signal, for both the SC (rs = -0.469, p < 0.05) and RS (rs = -0.486, p < 0.05) joints, while it showed no significant correlation with synovitis score at any threshold. Our software can accurately quantify cartilage in the wrist, and BME is associated with cartilage loss in patients with RA. Advances in knowledge: Our software can quantify cartilage using conventional MR images of the wrist. BME is associated with cartilage loss in RA patients.
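
    The segmentation rule described above, keeping pixels whose intensity lies between lower and upper multiples of the muscle signal, can be sketched directly in Python; the array names, region of interest, and pixel size below are illustrative assumptions.

      import numpy as np

      def cartilage_area_mm2(image, muscle_roi, lower=0.6, upper=1.2, pixel_area_mm2=0.09):
          """Area of pixels falling between lower and upper multiples of the mean muscle signal."""
          muscle = float(np.mean(muscle_roi))
          mask = (image >= lower * muscle) & (image <= upper * muscle)
          return mask.sum() * pixel_area_mm2

      img = np.random.default_rng(1).uniform(0, 200, size=(64, 64))   # stand-in image
      muscle = img[40:50, 40:50]                                      # hypothetical muscle ROI
      print(f"{cartilage_area_mm2(img, muscle):.1f} mm^2")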

  10. Omics Metadata Management Software (OMMS).

    PubMed

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically-dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov.

  11. Recording and assessment of evoked potentials with electrode arrays.

    PubMed

    Miljković, N; Malešević, N; Kojić, V; Bijelić, G; Keller, T; Popović, D B

    2015-09-01

    In order to optimize the procedure for the assessment of evoked potentials and to provide visualization of the flow of action potentials along the motor systems, we introduced array electrodes for stimulation and recording and developed software for the analysis of the recordings. The system uses a stimulator connected to an electrode array for the generation of evoked potentials, an electrode array connected to the amplifier, A/D converter and computer for the recording of evoked potentials, and a dedicated software application. The method has been tested for the assessment of the H-reflex on the triceps surae muscle in six healthy humans. The stimulation electrode array with 16 pads was positioned over the posterior aspect of the thigh, while the recording electrode array with 16 pads was positioned over the triceps surae muscle. The stimulator activated all the pads of the stimulation electrode array asynchronously, while the signals were recorded continuously at all the recording sites. The results are topography maps (spatial distribution of evoked potentials) and matrices (spatial visualization of nerve excitability). The software allows the automatic selection of the lowest stimulation intensity that achieves the maximal H-reflex amplitude and the selection of the recording/stimulation pads according to predefined criteria. The analysis of the results shows that the method provides richer information than the conventional recording of the H-reflex with regard to the spatial distribution.
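
    The intensity-selection rule mentioned above can be sketched as a search over a recruitment curve; the 95%-of-maximum criterion and the sample values below are illustrative assumptions, not the published selection criteria.

      import numpy as np

      def lowest_intensity_for_max(intensities_mA, amplitudes_mV, tol=0.95):
          """Lowest intensity whose H-reflex amplitude is within tol of the maximum."""
          amps = np.asarray(amplitudes_mV, dtype=float)
          idx = np.flatnonzero(amps >= tol * amps.max())[0]   # intensities assumed ascending
          return intensities_mA[idx]

      intensities = [10, 15, 20, 25, 30, 35]                  # mA
      amplitudes = [0.1, 0.8, 2.4, 3.1, 3.2, 2.9]             # H-reflex amplitude, mV
      print(lowest_intensity_for_max(intensities, amplitudes), "mA")   # 25 mA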

  12. Omics Metadata Management Software (OMMS)

    PubMed Central

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically-dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. Availability: The OMMS can be obtained at http://omms.sandia.gov PMID:26124554

  13. THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability

    PubMed Central

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.

    2017-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125

  14. THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability.

    PubMed

    Theurich, Gerhard; DeLuca, C; Campbell, T; Liu, F; Saint, K; Vertenstein, M; Chen, J; Oehmke, R; Doyle, J; Whitcomb, T; Wallcraft, A; Iredell, M; Black, T; da Silva, A M; Clune, T; Ferraro, R; Li, P; Kelley, M; Aleinov, I; Balaji, V; Zadeh, N; Jacob, R; Kirtman, B; Giraldo, F; McCarren, D; Sandgathe, S; Peckham, S; Dunlap, R

    2016-07-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS ® ); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  15. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    NASA Technical Reports Server (NTRS)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; hide

    2016-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  16. Evaluation of three electronic report processing systems for preparing hydrologic reports of the U.S Geological Survey, Water Resources Division

    USGS Publications Warehouse

    Stiltner, G.J.

    1990-01-01

    In 1987, the Water Resources Division of the U.S. Geological Survey undertook three pilot projects to evaluate electronic report processing systems as a means to improve the quality and timeliness of reports pertaining to water resources investigations. The three projects selected for study included the use of the following configuration of software and hardware: Ventura Publisher software on an IBM model AT personal computer, PageMaker software on a Macintosh computer, and FrameMaker software on a Sun Microsystems workstation. The following assessment criteria were to be addressed in the pilot studies: the combined use of text, tables, and graphics; analysis of time; ease of learning; compatibility with the existing minicomputer system; and technical limitations. It was considered essential that the camera-ready copy produced be in a format suitable for publication. Visual improvement alone was not a consideration. This report consolidates and summarizes the findings of the electronic report processing pilot projects. Text and table files originating on the existing minicomputer system were successfully transformed to the electronic report processing systems in American Standard Code for Information Interchange (ASCII) format. Graphics prepared using a proprietary graphics software package were transferred to all the electronic report processing software through the use of Computer Graphic Metafiles. Graphics from other sources were entered into the systems by scanning paper images. Comparative analysis of time needed to process text and tables by the electronic report processing systems and by conventional methods indicated that, although more time is invested in creating the original page composition for an electronically processed report, substantial time is saved in producing subsequent reports because the format can be stored and re-used by electronic means as a template. Because of the more compact page layouts, costs of printing the reports were 15% to 25% less than costs of printing the reports prepared by conventional methods. Because the largest report workload in the offices conducting water resources investigations is preparation of Water-Resources Investigations Reports, Open-File Reports, and annual State Data Reports, the pilot studies only involved these projects. (USGS)

  17. The Computational Infrastructure for Geodynamics: An Example of Software Curation and Citation in the Geodynamics Community

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Kellogg, L. H.

    2017-12-01

    Curation of software promotes discoverability and accessibility and works hand in hand with scholarly citation to ascribe value to, and provide recognition for, software development. To meet this challenge, the Computational Infrastructure for Geodynamics (CIG) maintains a community repository built on custom and open tools to promote discovery, access, identification, credit, and provenance of research software for the geodynamics community. CIG (geodynamics.org) originated from recognition of the tremendous effort required to develop sound software and the need to reduce duplication of effort and to sustain community codes. CIG curates software across 6 domains and has developed and follows software best practices that include establishing test cases, documentation, and a citable publication for each software package. CIG software landing web pages provide access to current and past releases; many are also accessible through the CIG community repository on github. CIG has now developed abc - attribution builder for citation to enable software users to give credit to software developers. abc uses zenodo as an archive and as the mechanism to obtain a unique identifier (DOI) for scientific software. To assemble the metadata, we searched the software's documentation and research publications and then requested the primary developers to verify. In this process, we have learned that each development community approaches software attribution differently. The metadata gathered is based on guidelines established by groups such as FORCE11 and OntoSoft. The rollout of abc is gradual as developers are forward-looking, rarely willing to go back and archive prior releases in zenodo. Going forward, all actively developed packages will utilize the zenodo and github integration to automate the archival process when a new release is issued. How to handle legacy software and multi-authored libraries, and how to assign roles to software, remain open issues.

  18. SEI Software Engineering Education Directory.

    DTIC Science & Technology

    1987-02-01

    Software Design and Development Gilbert, Philip Systems: CDC Cyber 170/750 CDC Cyber 170/760 DEC PDP 11/44 PRIME AT&T 3B5 IBM PC IBM XT IBM RT...Macintosh VAX 8300 Software System Development and Laboratory CS 480/480L U P X T Textbooks: Software Design and Development Gilbert, Philip Systems: CDC...Acting Chair (618) 692-2386 Courses: Software Design and Development CS 424 U P E Y Textbooks: Software Design and Development, Gilbert, Philip Topics

  19. Sizing and phenotyping of cellular vesicles using Nanoparticle Tracking Analysis

    PubMed Central

    Dragovic, Rebecca A.; Gardiner, Christopher; Brooks, Alexandra S.; Tannetta, Dionne S.; Ferguson, David J.P.; Hole, Patrick; Carr, Bob; Redman, Christopher W.G.; Harris, Adrian L.; Dobson, Peter J.; Harrison, Paul; Sargent, Ian L.

    2011-01-01

    Cellular microvesicles and nanovesicles (exosomes) are involved in many disease processes and have major potential as biomarkers. However, developments in this area are constrained by limitations in the technology available for their measurement. Here we report on the use of fluorescence nanoparticle tracking analysis (NTA) to rapidly size and phenotype cellular vesicles. In this system vesicles are visualized by light scattering using a light microscope. A video is taken, and the NTA software tracks the Brownian motion of individual vesicles and calculates their size and total concentration. Using human placental vesicles and plasma, we have demonstrated that NTA can measure cellular vesicles as small as ∼50 nm and is far more sensitive than conventional flow cytometry (lower limit ∼300 nm). By combining NTA with fluorescence measurement we have demonstrated that vesicles can be labeled with specific antibody-conjugated quantum dots, allowing their phenotype to be determined. From the Clinical Editor: The authors of this study utilized fluorescence nanoparticle tracking analysis (NTA) to rapidly size and phenotype cellular vesicles, demonstrating that NTA is far more sensitive than conventional flow cytometry. PMID:21601655
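
    The sizing step rests on a standard relationship: the diffusion coefficient recovered from the tracked Brownian motion gives the hydrodynamic diameter via the Stokes-Einstein equation. A minimal Python sketch with illustrative values (not the NTA software itself):

      import math

      def hydrodynamic_diameter_nm(msd_m2, dt_s, temp_K=298.15, viscosity_Pa_s=0.89e-3):
          k_B = 1.380649e-23                # Boltzmann constant, J/K
          D = msd_m2 / (4.0 * dt_s)         # 2D tracking: MSD = 4 * D * dt
          return k_B * temp_K / (3.0 * math.pi * viscosity_Pa_s * D) * 1e9

      # e.g. a mean squared displacement of 4.0e-13 m^2 over one 33 ms video frame
      print(f"{hydrodynamic_diameter_nm(4.0e-13, 0.033):.0f} nm")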

  20. Monthly streamflow forecasting with auto-regressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with an enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
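
    The conventional baseline in this comparison, fitting an ARIMA model to a 9:1 train/test split and scoring the forecast with RMSE and MAE, can be sketched as follows; the study itself used R, so this Python fragment with synthetic data is only an illustration of the workflow.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(42)
      series = 50 + np.cumsum(rng.normal(0, 3, size=120))   # synthetic monthly streamflow
      split = int(len(series) * 0.9)                        # 9:1 train/test ratio
      train, test = series[:split], series[split:]

      model = ARIMA(train, order=(1, 1, 1)).fit()
      forecast = model.forecast(steps=len(test))

      rmse = float(np.sqrt(np.mean((forecast - test) ** 2)))
      mae = float(np.mean(np.abs(forecast - test)))
      print(f"RMSE={rmse:.2f}, MAE={mae:.2f}")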

  1. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    Our objective was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.
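
    The UBCI model itself is not reproduced in this record; as a generic illustration of how both a purity value and a noise-driven uncertainty can be extracted from a single chromatogram, the sketch below computes a main-peak area fraction and propagates an area uncertainty estimated from the baseline noise. The noise model and the numbers are assumptions for illustration, not the published method.

        import numpy as np

        def purity_with_uncertainty(main_area, impurity_areas, noise_sd, n_points, dt):
            """Return (purity, 1-sigma uncertainty) for the main-peak area fraction."""
            imp_total = sum(impurity_areas)
            total = main_area + imp_total
            purity = main_area / total
            # Crude area uncertainty from integrating white baseline noise over a peak.
            sigma_area = noise_sd * np.sqrt(n_points) * dt
            sigma_imp = sigma_area * np.sqrt(len(impurity_areas))
            # First-order error propagation of P = A_main / (A_main + A_impurities).
            sigma_purity = np.sqrt(((1 - purity) * sigma_area) ** 2
                                   + (purity * sigma_imp) ** 2) / total
            return purity, sigma_purity

        p, sp = purity_with_uncertainty(main_area=9500.0, impurity_areas=[300.0, 200.0],
                                        noise_sd=0.5, n_points=400, dt=0.01)
        print(f"purity = {100 * p:.2f}% +/- {100 * sp:.4f}%")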

  2. Fast interrupt platform for extended DOS

    NASA Technical Reports Server (NTRS)

    Duryea, T. W.

    1995-01-01

    Extended DOS offers the unique combination of a simple operating system which allows direct access to the interrupt tables, 32 bit protected mode access to 4096 MByte address space, and the use of industry standard C compilers. The drawback is that fast interrupt handling requires both 32 bit and 16 bit versions of each real-time process interrupt handler to avoid mode switches on the interrupts. A set of tools has been developed which automates the process of transforming the output of a standard 32 bit C compiler to 16 bit interrupt code which directly handles the real mode interrupts. The entire process compiles one set of source code via a make file, which boosts productivity by making the management of the compile-link cycle very simple. The software components are in the form of classes written mostly in C. A foreground process written as a conventional application which can use the standard C libraries can communicate with the background real-time classes via a message passing mechanism. The platform thus enables the integration of high performance real-time processing into a conventional application framework.

  3. A low-cost digital filing system for echocardiography data with MPEG4 compression and its application to remote diagnosis.

    PubMed

    Umeda, Akira; Iwata, Yasushi; Okada, Yasumasa; Shimada, Megumi; Baba, Akiyasu; Minatogawa, Yasuyuki; Yamada, Takayasu; Chino, Masao; Watanabe, Takafumi; Akaishi, Makoto

    2004-12-01

    The high cost of digital echocardiographs and the large size of data files hinder the adoption of remote diagnosis of digitized echocardiography data. We have developed a low-cost digital filing system for echocardiography data. In this system, data from a conventional analog echocardiograph are captured using a personal computer (PC) equipped with an analog-to-digital converter board. Motion picture data are promptly compressed using a Moving Picture Experts Group (MPEG) 4 codec. The digitized data with preliminary reports obtained in a rural hospital are then sent to cardiologists at distant urban general hospitals via the internet. The cardiologists can evaluate the data using widely available movie-viewing software (Windows Media Player). The diagnostic accuracy of this double-check system was confirmed by comparison with ordinary super-VHS videotapes. We have demonstrated that digitization of echocardiography data from a conventional analog echocardiograph and MPEG 4 compression can be performed using an ordinary PC-based system, and that this system enables highly efficient digital storage and remote diagnosis at low cost.

  4. Fabrication of cooled radial turbine rotor

    NASA Technical Reports Server (NTRS)

    Hammer, A. N.; Aigret, G. G.; Psichogios, T. P.; Rodgers, C.

    1986-01-01

    A design and fabrication program was conducted to evaluate a unique concept for constructing a cooled, high temperature radial turbine rotor. This concept, called split blade fabrication, was developed as an alternative to internal ceramic coring. In this technique, the internal cooling cavity is created, without flow dividers or any other detail, by a solid (and therefore stronger) ceramic plate which can be more firmly anchored within the casting shell mold than can conventional detailed ceramic cores. Casting is conducted in the conventional manner, except that the finished product, instead of having finished internal cooling passages, is now a split blade. The internal details of the blade are created separately together with a carrier sheet. The inserts are superalloy, and both are produced by essentially the same software such that they are a net fit. The carrier assemblies are loaded into the split blade and the edges are sealed by welding. The entire wheel is Hot Isostatic Pressed (HIPed), braze bonding the internal details to the inside of the blades. During this program, two wheels were successfully produced by the split blade fabrication technique.

  5. Dynamic modeling and verification of an energy-efficient greenhouse with an aquaponic system using TRNSYS

    NASA Astrophysics Data System (ADS)

    Amin, Majdi Talal

    Currently, there is no integrated dynamic simulation program for an energy efficient greenhouse coupled with an aquaponic system. This research is intended to promote the thermal management of greenhouses in order to provide sustainable food production with the lowest possible energy use and material waste. A brief introduction of greenhouses, passive houses, energy efficiency, renewable energy systems, and their applications are included for ready reference. An experimental working scaled-down energy-efficient greenhouse was built to verify and calibrate the results of a dynamic simulation model made using TRNSYS software. However, TRNSYS requires the aid of Google SketchUp to develop 3D building geometry. The simulation model was built following the passive house standard as closely as possible. The new simulation model was then utilized to design an actual greenhouse with Aquaponics. It was demonstrated that the passive house standard can be applied to improve upon conventional greenhouse performance, and that it is adaptable to different climates. The energy-efficient greenhouse provides the required thermal environment for fish and plant growth, while eliminating the need for conventional cooling and heating systems.

  6. Quantification of 235U and 238U activity concentrations for undeclared nuclear materials by a digital gamma-gamma coincidence spectroscopy.

    PubMed

    Zhang, Weihua; Yi, Jing; Mekarski, Pawel; Ungar, Kurt; Hauck, Barry; Kramer, Gary H

    2011-06-01

    The purpose of this study is to investigate the possibility of verifying depleted uranium (DU), natural uranium (NU), low enriched uranium (LEU) and high enriched uranium (HEU) with a newly developed digital gamma-gamma coincidence spectroscopy system. The spectroscopy system consists of two NaI(Tl) scintillators and the XIA LLC Digital Gamma Finder (DGF)/Pixie-4 software and card package. The results demonstrate that the spectroscopy provides an effective method of (235)U and (238)U quantification based on the count rate of their gamma-gamma coincidence counting signatures. The main advantages of this approach over conventional gamma spectrometry include the low background continuum near the coincident signatures of (235)U and (238)U, reduced interference from other radionuclides owing to the gamma-gamma coincidence counting, and region-of-interest (ROI) image analysis for uranium enrichment determination. Compared to conventional gamma spectrometry, the method offers the additional advantage of requiring minimal calibrations for (235)U and (238)U quantification at different sample geometries. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
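
    As a rough illustration of how activity concentrations translate into an enrichment figure, the sketch below converts (235)U and (238)U activities (as would be derived from the coincidence count rates and detection efficiencies) into a mass-based enrichment estimate using textbook specific activities. The constants and example numbers are assumptions, not values from the study.

        # Approximate specific activities (Bq per gram of the pure isotope).
        SPECIFIC_ACTIVITY_U235 = 8.0e4
        SPECIFIC_ACTIVITY_U238 = 1.24e4

        def enrichment_percent(activity_u235, activity_u238):
            """Activities in Bq; returns the 235U mass fraction of (235U + 238U) in percent."""
            mass_u235 = activity_u235 / SPECIFIC_ACTIVITY_U235
            mass_u238 = activity_u238 / SPECIFIC_ACTIVITY_U238
            return 100.0 * mass_u235 / (mass_u235 + mass_u238)

        # Natural uranium: the 238U activity is roughly twenty times the 235U activity.
        print(f"{enrichment_percent(activity_u235=575.0, activity_u238=12445.0):.2f} % 235U")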

  7. Advanced software development workstation project: Engineering scripting language. Graphical editor

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Software development is widely considered to be a bottleneck in the development of complex systems, both in terms of development and in terms of maintenance of deployed systems. Cost of software development and maintenance can also be very high. One approach to reducing costs and relieving this bottleneck is increasing the reuse of software designs and software components. A method for achieving such reuse is a software parts composition system. Such a system consists of a language for modeling software parts and their interfaces, a catalog of existing parts, an editor for combining parts, and a code generator that takes a specification and generates code for that application in the target language. The Advanced Software Development Workstation is intended to be an expert system shell designed to provide the capabilities of a software part composition system.
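
    As a toy illustration of the idea (entirely hypothetical, not the Advanced Software Development Workstation itself), the sketch below models a catalog of reusable parts with declared interfaces, a composition that wires parts together, and a trivial generator that emits code for the composed application.

        from dataclasses import dataclass

        @dataclass
        class Part:
            name: str
            inputs: list[str]    # interface description of the part
            output: str
            template: str        # code template for the target language

        CATALOG = {
            "scale": Part("scale", ["x", "factor"], "y", "{out} = {x} * {factor}"),
            "offset": Part("offset", ["x", "delta"], "y", "{out} = {x} + {delta}"),
        }

        def generate(composition):
            """composition: list of (part_name, bindings) pairs; returns generated code."""
            return "\n".join(CATALOG[name].template.format(**bindings)
                             for name, bindings in composition)

        # Compose two parts: scale a raw reading, then apply a calibration offset.
        spec = [("scale", {"out": "v1", "x": "raw", "factor": "2.5"}),
                ("offset", {"out": "v2", "x": "v1", "delta": "0.1"})]
        print(generate(spec))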

  8. A Prototype for the Support of Integrated Software Process Development and Improvement

    NASA Astrophysics Data System (ADS)

    Porrawatpreyakorn, Nalinpat; Quirchmayr, Gerald; Chutimaskul, Wichian

    An efficient software development process is one of the key success factors for quality software. Both the appropriate establishment and the continuous improvement of integrated project management and of the software development process can result in efficiency. This paper hence proposes a software process maintenance framework which consists of two core components: an integrated PMBOK-Scrum model describing how to establish a comprehensive set of project management and software engineering processes, and a software development maturity model advocating software process improvement. In addition, a prototype tool to support the framework is introduced.

  9. Four applications of a software data collection and analysis methodology

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    The evaluation of software technologies suffers because of the lack of quantitative assessment of their effect on software development and modification. A seven-step data collection and analysis methodology couples software technology evaluation with software measurement. Four in-depth applications of the methodology are presented. The four studies represent each of the general categories of analyses on the software product and development process: blocked subject-project studies, replicated project studies, multi-project variation studies, and single project strategies. The four applications are in the areas of, respectively, software testing, cleanroom software development, characteristic software metric sets, and software error analysis.

  10. Confocal microscopy with strip mosaicing for rapid imaging over large areas of excised tissue

    PubMed Central

    Li, Yongbiao; Larson, Bjorg; Peterson, Gary; Seltzer, Emily; Toledo-Crow, Ricardo; Rajadhyaksha, Milind

    2013-01-01

    Confocal mosaicing microscopy is a developing technology platform for imaging tumor margins directly in freshly excised tissue, without the processing required for conventional pathology. Previously, mosaicing on 12 × 12 mm2 of excised skin tissue from Mohs surgery and detection of basal cell carcinoma margins was demonstrated in 9 min. Last year, we reported the feasibility of a faster approach called “strip mosaicing,” which was demonstrated on a 10 × 10 mm2 area of tissue in 3 min. Here we describe further advances in instrumentation, software, and speed. A mechanism was also developed to flatten tissue in order to enable consistent and repeatable acquisition of images over large areas. We demonstrate mosaicing on 10 × 10 mm2 of skin tissue with 1-μm lateral resolution in 90 s. A 2.5 × 3.5 cm2 piece of breast tissue was scanned with 0.8-μm lateral resolution in 13 min. Rapid mosaicing of confocal images on large areas of fresh tissue potentially offers a means to perform pathology at the bedside. Imaging of tumor margins with strip mosaicing confocal microscopy may serve as an adjunct to conventional (frozen or fixed) pathology for guiding surgery. PMID:23389736

  11. GUI for Coordinate Measurement of an Image for the Estimation of Geometric Distortion of an Opto-electronic Display System

    NASA Astrophysics Data System (ADS)

    Saini, Surender Singh; Sardana, Harish Kumar; Pattnaik, Shyam Sundar

    2017-06-01

    Conventional image editing software in combination with other techniques is not only difficult to apply to an image but also permits a user to perform only basic functions one at a time. However, image processing algorithms and photogrammetric systems have been developed in the recent past for real-time pattern recognition applications. A graphical user interface (GUI) was developed which can perform multiple functions simultaneously for the analysis and estimation of geometric distortion in an image with reference to the corresponding distorted image. The GUI measures, records, and visualizes the performance metric of the X/Y coordinates of one image over the other. The various keys and icons provided in the utility extract the coordinates of the distortion-free reference image and of the image with geometric distortion. The error between these corresponding points gives the measure of distortion and is also used to evaluate the correction parameters for image distortion. As the GUI minimizes human interference in the process of geometric correction, its execution requires only the use of the icons and keys provided in the utility; this technique gives swift and accurate results compared to other conventional methods for the measurement of the X/Y coordinates of an image.
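
    The underlying computation is simple: subtract the reference coordinates from the corresponding coordinates extracted from the distorted image and summarize the displacements. The Python sketch below is an illustration only (the GUI described here was not necessarily built this way) and reports per-point errors and an RMS distortion figure for hypothetical point sets.

        import numpy as np

        def distortion_metrics(ref_xy, dist_xy):
            """ref_xy, dist_xy: (n, 2) arrays of corresponding point coordinates (pixels)."""
            errors = dist_xy - ref_xy                      # per-point displacement vectors
            magnitudes = np.linalg.norm(errors, axis=1)    # distortion at each point
            return magnitudes, np.sqrt(np.mean(magnitudes ** 2))

        ref = np.array([[100, 100], [400, 100], [400, 300], [100, 300]], dtype=float)
        warped = ref + np.array([[1.2, -0.5], [0.8, 0.4], [-1.1, 0.9], [0.3, -1.4]])
        per_point, rms = distortion_metrics(ref, warped)
        print(per_point.round(2), f"RMS distortion = {rms:.2f} px")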

  12. The Hierarchical Data Format as a Foundation for Community Data Sharing

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2017-12-01

    Hierarchical Data Format (HDF) formats and libraries have been used by individual researchers and major science programs across many Earth and Space Science disciplines and sectors to provide high-performance information storage and access for several decades. Generic group, dataset, and attribute objects in HDF have been combined in many ways to form domain objects that scientists understand and use. Well-known applications of HDF in the Earth Sciences include thousands of global satellite observations and products produced by NASA's Earth Observing System using the HDF-EOS conventions, navigation quality bathymetry produced as Bathymetric Attributed Grids (BAGs) by the OpenNavigationSurface project and others, seismic wave collections written into the Adaptable Seismic Data Format (ASDF), and many oceanographic and atmospheric products produced using the Climate and Forecast (CF) conventions with the netCDF4 data model and API to HDF5. This is the modus operandi of these communities: 1) develop a model of scientific data objects and associated metadata used in a domain, 2) implement that model using HDF, 3) develop software libraries that connect that model to tools, and 4) encourage adoption of those tools in the community. Understanding these domain object implementations and facilitating communication across communities is an important goal of The HDF Group. We will discuss these examples and approaches to community outreach during this session.
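
    Step 2 of that workflow, implementing a domain model with HDF's generic objects, can be sketched in a few lines of Python with the h5py library. The convention name, group layout and attributes below are hypothetical placeholders, not any of the conventions cited above.

        import h5py
        import numpy as np

        with h5py.File("example_product.h5", "w") as f:
            grid = f.create_group("gridded_observation")          # domain object as a group
            grid.attrs["convention"] = "EXAMPLE-CONVENTION-1.0"   # hypothetical convention tag
            grid.attrs["instrument"] = "demo_sensor"

            data = grid.create_dataset("brightness", data=np.random.rand(180, 360),
                                       compression="gzip")
            data.attrs["units"] = "K"
            data.attrs["_FillValue"] = -9999.0

            grid.create_dataset("latitude", data=np.linspace(-89.5, 89.5, 180))
            grid.create_dataset("longitude", data=np.linspace(-179.5, 179.5, 360))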

  13. Software Safety Progress in NASA

    NASA Technical Reports Server (NTRS)

    Radley, Charles F.

    1995-01-01

    NASA has developed guidelines for the development and analysis of safety-critical software. These guidelines have been documented in a Guidebook for Safety Critical Software Development and Analysis. The guidelines represent a practical 'how to' approach to assist software developers and safety analysts with cost-effective methods for software safety. They provide guidance on the implementation of the recent NASA Software Safety Standard NSS-1740.13, which was released as an 'Interim' version in June 1994 and is scheduled for formal adoption in late 1995. This paper is a survey of the methods in general use, resulting in the NASA guidelines for safety-critical software development and analysis.

  14. Design and Analysis of Drive Shaft using Kevlar/Epoxy and Glass/Epoxy as a Composite Material

    NASA Astrophysics Data System (ADS)

    Karthikeyan, P.; Gobinath, R.; Kumar, L. Ajith; Jenish, D. Xavier

    2017-05-01

    In the automobile industry the drive shaft is one of the most important components for transmitting power from the engine to the rear wheels through the differential gear. Steel drive shafts are generally used in the automobile industry, but there is now strong interest in replacing the steel drive shaft with a composite drive shaft. The overall objective of this paper is to analyze the composite drive shaft in order to find the best replacement for the conventional steel drive shaft. The use of advanced composite materials such as Kevlar, graphite, carbon and glass with proper resins has resulted in remarkable achievements in the automobile industry because of their greater specific strength and specific modulus, improved fatigue and corrosion resistance, and reduction in energy requirements due to the reduction in weight compared with a steel shaft. This paper presents the modeling and analysis of a drive shaft using Kevlar/Epoxy and Glass/Epoxy as composite materials, to find the best replacement for conventional steel drive shafts with a Kevlar/Epoxy or Glass/Epoxy composite drive shaft. Modeling is done using CATIA software and analysis is carried out using ANSYS 10.0 software for easy understanding. The composite drive shaft reduces the weight by 81.67% for Kevlar/Epoxy and 72.66% for Glass/Epoxy when compared with the conventional steel drive shaft.

  15. Enhanced development of a catalyst chamber for the decomposition of up to 1.0 kg/s hydrogen peroxide

    NASA Astrophysics Data System (ADS)

    Božić, Ognjan; Porrmann, Dennis; Lancelle, Daniel; May, Stefan

    2016-06-01

    A new innovative hybrid rocket engine concept is being developed within the AHRES program of the German Aerospace Center (DLR). This rocket engine is based on hydroxyl-terminated polybutadiene (HTPB) with metallic additives as solid fuel and high test peroxide (HTP) as liquid oxidizer. Instead of a conventional ignition system, a catalyst chamber with a silver mesh catalyst is designed to decompose the HTP. The newly modified catalyst chamber is able to decompose up to 1.0 kg/s of 87.5 wt% HTP. Used as a monopropellant thruster, this equals an average thrust of 1600 N. The catalyst chamber is designed using the self-developed software tool SHAKIRA. The applied kinetic law, which determines the catalytic decomposition of HTP within the catalyst chamber, is given and commented on. Several calculations are carried out to determine the appropriate geometry for complete decomposition with a minimum of catalyst material. A number of tests under steady state conditions are carried out, using 87.5 wt% HTP with different flow rates and a constant amount of catalyst material. To verify the decomposition, the temperature is measured and compared with the theoretical prediction. The experimental results show good agreement with the results generated by the design tool. The developed catalyst chamber provides a simple, reliable ignition system for hybrid rocket propulsion systems based on hydrogen peroxide as oxidizer. This system is capable of multiple reignitions. The developed hardware and software can be used to design full scale monopropellant thrusters based on HTP and catalyst chambers for hybrid rocket engines.
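
    A quick consistency check of the quoted figures: thrust is the product of mass flow rate and effective exhaust velocity, so 1.0 kg/s producing about 1600 N implies an effective exhaust velocity near 1600 m/s for the decomposed 87.5 wt% HTP. The exhaust-velocity value below is inferred from those figures, not stated in the record.

        G0 = 9.80665                     # standard gravity, m/s^2

        def thrust(mdot_kg_s, exhaust_velocity_m_s):
            # Monopropellant thruster: F = mdot * v_e (nozzle pressure term neglected).
            return mdot_kg_s * exhaust_velocity_m_s

        F = thrust(1.0, 1600.0)
        print(f"thrust = {F:.0f} N, specific impulse = {1600.0 / G0:.0f} s")   # ~163 s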

  16. Modelling in conventional electroporation for model cell with organelles using COMSOL Multiphysics

    NASA Astrophysics Data System (ADS)

    Sulaeman, M. Y.; Widita, R.

    2016-03-01

    Conventional electroporation is the formation of pores in the cell membrane due to an external electric field applied to the cell. The purposes of creating pores in the cell using conventional electroporation are to increase the effectiveness of chemotherapy (electrochemotherapy) and to kill cancer tissue using irreversible electroporation. Modeling of the electroporation phenomenon on a model cell was carried out using the software COMSOL Multiphysics 4.3b with an applied external electric field of 1.1 kV/cm to find the transmembrane voltage and pore density. From the resulting potential distribution and transmembrane voltage, it can be concluded that pore formation occurs only in the cell membrane; the field does not penetrate into the interior of the model cell, so no pores form in its organelles.
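
    For orientation, the steady-state (Schwan) estimate of the induced transmembrane voltage for a spherical cell in a uniform field is dV = 1.5 * E * r * cos(theta). The sketch below evaluates it at the 1.1 kV/cm field used above; the 10 um cell radius is an assumed value, and this analytical estimate is only a cross-check, not the COMSOL model itself.

        import numpy as np

        def transmembrane_voltage(e_field_v_per_m, radius_m, theta_rad):
            # Steady-state Schwan equation for a spherical cell with a non-conducting membrane.
            return 1.5 * e_field_v_per_m * radius_m * np.cos(theta_rad)

        E = 1.1e5          # 1.1 kV/cm expressed in V/m
        r = 10e-6          # assumed 10 um cell radius
        for theta_deg in (0, 45, 90):
            dv = transmembrane_voltage(E, r, np.radians(theta_deg))
            print(f"theta = {theta_deg:3d} deg -> dV = {dv:.3f} V")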

  17. User systems guidelines for software projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrahamson, L.

    1986-04-01

    This manual presents guidelines for software standards which were developed so that software project-development teams and management involved in approving the software could have a generalized view of all phases in the software production procedure and the steps involved in completing each phase. Guidelines are presented for six phases of software development: project definition, building a user interface, designing software, writing code, testing code, and preparing software documentation. The discussions for each phase include examples illustrating the recommended guidelines. 45 refs. (DWL)

  18. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.

  19. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

    Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric data base management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random access formats is implemented and integrated with the MASS AVE80 Series general purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.

  20. Analysis on the relationship between economic development and water environment pollution in Shandong province

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, Zhengxian

    2018-02-01

    With the continuous economic development of the Shandong Province watershed, large quantities of pollutants have been discharged and the water quality of the basin has undergone significant changes. To study the relationship between economic development and pollutant discharge in the Shandong Province watershed, this paper relates economic growth to pollutant emissions using the watershed's per capita GDP and its wastewater, COD and ammonia nitrogen (AN) emissions for 2002-2015. The data were analyzed with software such as SPSS, and cubic equation models between the various pollutants and the economic indexes were fitted. The relationships between pollutants and economic development were then plotted to study the trends of conventional pollutant emissions against economic development. It was found that only the relationship between industrial wastewater discharge and per capita GDP is well coordinated; that is, industrial wastewater emissions rise first and then fall as the basin economy continues to develop. Finally, based on the results of the study, proposals for the water environment and economic development were put forward.
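
    The fitted relationship is a cubic polynomial between per capita GDP and each emission indicator. As an illustration of that fitting step only (the data below are hypothetical placeholders, not the Shandong figures), the Python sketch fits and evaluates such a curve.

        import numpy as np

        gdp = np.array([12, 16, 21, 27, 33, 40, 46, 52, 57, 61, 64, 66, 68, 69], float)
        emission = np.array([20, 24, 29, 33, 36, 38, 39, 39, 38, 36, 34, 33, 32, 31], float)

        coeffs = np.polyfit(gdp, emission, deg=3)   # cubic fit: a*x^3 + b*x^2 + c*x + d
        model = np.poly1d(coeffs)

        print("cubic coefficients:", np.round(coeffs, 4))
        print("predicted emission at GDP = 70:", round(model(70.0), 2))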

  1. A measurement system for large, complex software programs

    NASA Technical Reports Server (NTRS)

    Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.

    1994-01-01

    This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.

  2. The dynamics of software development project management: An integrative systems dynamic perspective

    NASA Technical Reports Server (NTRS)

    Vandervelde, W. E.; Abdel-Hamid, T.

    1984-01-01

    Rather than continuing to focus on software development projects per se, the system dynamics modeling approach outlined is extended to investigate a broader set of issues pertaining to the software development organization. Rather than trace the life cycle(s) of one or more software projects, the focus is on the operations of a software development department as a continuous stream of software products is developed, placed into operation, and maintained. A number of research questions are "ripe" for investigation, including: (1) the efficacy of different organizational structures in different software development environments, (2) personnel turnover, (3) the impact of management approaches such as management by objectives, and (4) the organizational/environmental determinants of productivity.

  3. Computer simulations and models for the performance characteristics of spectrally equivalent X-ray beams in medical diagnostic radiology

    PubMed Central

    Okunade, Akintunde A.

    2007-01-01

    In order to achieve uniformity in radiological imaging, it is recommended that the concept of equivalence in shape (quality) and size (quantity) of clinical X-ray beams should be used for carrying out the comparative evaluation of image and patient dose. When used under the same irradiation geometry, X-ray beams that are strictly or relatively equivalent in terms of shape and size will produce identical or relatively identical image quality and patient dose. Simple mathematical models and the software program EQSPECT.FOR were developed for the comparative evaluation of the performance characteristics in terms of contrast (C), contrast-to-noise ratio (CNR) and figure-of-merit (FOM = CNR^2/DOSE) for spectrally equivalent beams transmitted through filter materials referred to as conventional and k-edge. At the same value of operating potential (kVp), results show that a spectrally equivalent beam transmitted through a conventional filter with higher atomic number (Z-value), in comparison with that transmitted through a conventional filter with lower Z-value, resulted in the same value of C and FOM. However, in comparison with the spectrally equivalent beam transmitted through the filter of lower Z-value, the beam through the filter of higher Z-value produced higher values of CNR and DOSE at equal tube loading (mAs) and kVp. Under the condition of equivalence of spectrum, at scaled (or reduced) tube loading and the same kVp, filter materials of higher Z-value can produce the same values of C, CNR, DOSE and FOM as filter materials of lower Z-value. Unlike the case of comparison of a spectrally equivalent beam transmitted through one conventional filter and that through another conventional filter, it is not possible to derive simple mathematical formulations for the relative performance of a spectrally equivalent beam transmitted through a given conventional filter material and that through a k-edge filter material. PMID:21224928
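
    The figures of merit compared in the study reduce to simple ratios of mean signals, noise and dose. The sketch below (hypothetical numbers, arbitrary units) shows how C, CNR and FOM = CNR^2/DOSE are computed from a detail region, a background region, the background noise and the patient dose.

        def contrast(signal_detail, signal_background):
            return (signal_background - signal_detail) / signal_background

        def cnr(signal_detail, signal_background, noise_sd):
            return abs(signal_background - signal_detail) / noise_sd

        def figure_of_merit(cnr_value, dose):
            return cnr_value ** 2 / dose

        s_det, s_bkg, sigma, dose = 820.0, 1000.0, 25.0, 1.8   # arbitrary units
        c = contrast(s_det, s_bkg)
        q = cnr(s_det, s_bkg, sigma)
        print(f"C = {c:.3f}, CNR = {q:.2f}, FOM = {figure_of_merit(q, dose):.2f}")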

  4. Developing sustainable software solutions for bioinformatics by the “Butterfly” paradigm

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2014-01-01

    Software design and sustainable software engineering are essential for the long-term development of bioinformatics software. Typical challenges in an academic environment are short-term contracts, island solutions, pragmatic approaches and loose documentation. Upcoming new challenges are big data, complex data sets, software compatibility and rapid changes in data representation. Our approach to cope with these challenges consists of iterative intertwined cycles of development (the “Butterfly” paradigm) for key steps in scientific software engineering. User feedback is valued, as is software planning in a sustainable and interoperable way. Tool usage should be easy and intuitive. A middleware supports a user-friendly Graphical User Interface (GUI) as well as database/tool development independently. We validated the approach in our own software development and compared the different design paradigms in various software solutions. PMID:25383181

  5. What Is An Expert System? ERIC Digest.

    ERIC Educational Resources Information Center

    Boss, Richard W.

    This digest describes and defines the various components of an expert system, e.g., a computerized tool designed to enhance the quality and availability of knowledge required by decision makers. It is noted that expert systems differ from conventional applications software in the following areas: (1) the existence of the expert systems shell, or…

  6. Unconventional Wisdom about Buying Technology

    ERIC Educational Resources Information Center

    Sullivan, Michael F.

    2004-01-01

    Conventional wisdom says that people should not buy anything in education until research is seen. The following questions should be asked: (1) Does that particular technology enhance learning? (2) Does that piece of software increase test scores? and (3) Do those machines reduce absenteeism? Of course the answer is always yes. No vendor is going…

  7. Media-Rich Learning through Universal Computing and Wireless Thin Client.

    ERIC Educational Resources Information Center

    Cain, Mark

    There is a whole infrastructure that must support a successful universal student computing requirement. Conventional approaches would have the students plugging into a wired network. That would necessitate network wires and hubs or many ports in each classroom. Students would need access to course-related software. Because the needs would change…

  8. Supporting Students in C++ Programming Courses with Automatic Program Style Assessment

    ERIC Educational Resources Information Center

    Ala-Mutka, Kirsti; Uimonen, Toni; Jarvinen, Hannu-Matti

    2004-01-01

    Professional programmers need common coding conventions to assure co-operation and a degree of quality of the software. Novice programmers, however, easily forget issues of programming style in their programming coursework. In particular with large classes, students may pass several courses without learning elements of programming style. This is…

  9. Microcomputer software development facilities

    NASA Technical Reports Server (NTRS)

    Gorman, J. S.; Mathiasen, C.

    1980-01-01

    A more efficient and cost-effective method for developing microcomputer software is to utilize a host computer with high-speed peripheral support. Application programs such as cross assemblers, loaders, and simulators are implemented in the host computer for each of the microcomputers for which software development is a requirement. The host computer is configured to operate in a time-share mode for multiple users. The remote terminals, printers, and downloading capabilities provided are based on user requirements. With this configuration a user, either local or remote, can use the host computer for microcomputer software development. Once the software is developed (through the code and modular debug stage) it can be downloaded to the development system or emulator in a test area where hardware/software integration functions can proceed. The microcomputer software program sources reside in the host computer and can be edited, assembled, loaded, and then downloaded as required until the software development project has been completed.

  10. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, C.; Crook, J.

    1998-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, reuse software, and reduced software reverification time related to software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.

  11. Head-to-Head Comparison of Global Longitudinal Strain Measurements among Nine Different Vendors: The EACVI/ASE Inter-Vendor Comparison Study.

    PubMed

    Farsalinos, Konstantinos E; Daraban, Ana M; Ünlü, Serkan; Thomas, James D; Badano, Luigi P; Voigt, Jens-Uwe

    2015-10-01

    This study was planned by the EACVI/ASE/Industry Task Force to Standardize Deformation Imaging to (1) test the variability of speckle-tracking global longitudinal strain (GLS) measurements among different vendors and (2) compare GLS measurement variability with conventional echocardiographic parameters. Sixty-two volunteers were studied using ultrasound systems from seven manufacturers. Each volunteer was examined by the same sonographer on all machines. Inter- and intraobserver variability was determined in a true test-retest setting. Conventional echocardiographic parameters were acquired for comparison. Using the software packages of the respective manufacturer and of two software-only vendors, endocardial GLS was measured because it was the only GLS parameter that could be provided by all manufacturers. We compared GLSAV (the average from the three apical views) and GLS4CH (measured in the four-chamber view) measurements among vendors and with the conventional echocardiographic parameters. Absolute values of GLSAV ranged from 18.0% to 21.5%, while GLS4CH ranged from 17.9% to 21.4%. The absolute difference between vendors for GLSAV was up to 3.7% strain units (P < .001). The interobserver relative mean errors were 5.4% to 8.6% for GLSAV and 6.2% to 11.0% for GLS4CH, while the intraobserver relative mean errors were 4.9% to 7.3% and 7.2% to 11.3%, respectively. These errors were lower than for left ventricular ejection fraction and most other conventional echocardiographic parameters. Reproducibility of GLS measurements was good and in many cases superior to conventional echocardiographic measurements. The small but statistically significant variation among vendors should be considered in performing serial studies and reflects a reference point for ongoing standardization efforts. Copyright © 2015 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  12. OnlineTED.com − a novel web-based audience response system for higher education. A pilot study to evaluate user acceptance

    PubMed Central

    Kühbeck, Felizian; Engelhardt, Stefan; Sarikas, Antonio

    2014-01-01

    Background and aim: Audience response (AR) systems are increasingly used in undergraduate medical education. However, high costs and complexity of conventional AR systems often limit their use. Here we present a novel AR system that is platform independent and does not require hardware clickers or additional software to be installed. Methods and results: “OnlineTED” was developed at Technische Universität München (TUM) based on Hypertext Preprocessor (PHP) with a MySQL database on the server side and JavaScript as the client-side programming language. “OnlineTED” enables lecturers to create and manage question sets online and start polls in-class via a web-browser. Students can participate in the polls with any internet-enabled device (smartphones, tablet-PCs or laptops). A paper-based survey was conducted with undergraduate medical students and lecturers at TUM to compare "OnlineTED" with conventional AR systems using clickers. "OnlineTED" received above-average evaluation results from both students and lecturers at TUM and was seen as on par with or superior to conventional AR systems. The survey results indicated that up to 80% of students at TUM own an internet-enabled device (smartphone or tablet-PC) for participation in web-based AR technologies. Summary and conclusion: “OnlineTED” is a novel web-based and platform-independent AR system for higher education that was well received by students and lecturers. As a non-commercial alternative to conventional AR systems it may foster interactive teaching in undergraduate education, in particular with large audiences. PMID:24575156

  13. Four simple recommendations to encourage best practices in research software

    PubMed Central

    Jiménez, Rafael C.; Kuzak, Mateusz; Alhamdoosh, Monther; Barker, Michelle; Batut, Bérénice; Borg, Mikael; Capella-Gutierrez, Salvador; Chue Hong, Neil; Cook, Martin; Corpas, Manuel; Flannery, Madison; Garcia, Leyla; Gelpí, Josep Ll.; Gladman, Simon; Goble, Carole; González Ferreiro, Montserrat; Gonzalez-Beltran, Alejandra; Griffin, Philippa C.; Grüning, Björn; Hagberg, Jonas; Holub, Petr; Hooft, Rob; Ison, Jon; Katz, Daniel S.; Leskošek, Brane; López Gómez, Federico; Oliveira, Luis J.; Mellor, David; Mosbergen, Rowland; Mulder, Nicola; Perez-Riverol, Yasset; Pergl, Robert; Pichler, Horst; Pope, Bernard; Sanz, Ferran; Schneider, Maria V.; Stodden, Victoria; Suchecki, Radosław; Svobodová Vařeková, Radka; Talvik, Harry-Anton; Todorov, Ilian; Treloar, Andrew; Tyagi, Sonika; van Gompel, Maarten; Vaughan, Daniel; Via, Allegra; Wang, Xiaochuan; Watson-Haigh, Nathan S.; Crouch, Steve

    2017-01-01

    Scientific research relies on computer software, yet software is not always developed following practices that ensure its quality and sustainability. This manuscript does not aim to propose new software development best practices, but rather to provide simple recommendations that encourage the adoption of existing best practices. Software development best practices promote better quality software, and better quality software improves the reproducibility and reusability of research. These recommendations are designed around Open Source values, and provide practical suggestions that contribute to making research software and its source code more discoverable, reusable and transparent. This manuscript is aimed at developers, but also at organisations, projects, journals and funders that can increase the quality and sustainability of research software by encouraging the adoption of these recommendations. PMID:28751965

  14. Four simple recommendations to encourage best practices in research software.

    PubMed

    Jiménez, Rafael C; Kuzak, Mateusz; Alhamdoosh, Monther; Barker, Michelle; Batut, Bérénice; Borg, Mikael; Capella-Gutierrez, Salvador; Chue Hong, Neil; Cook, Martin; Corpas, Manuel; Flannery, Madison; Garcia, Leyla; Gelpí, Josep Ll; Gladman, Simon; Goble, Carole; González Ferreiro, Montserrat; Gonzalez-Beltran, Alejandra; Griffin, Philippa C; Grüning, Björn; Hagberg, Jonas; Holub, Petr; Hooft, Rob; Ison, Jon; Katz, Daniel S; Leskošek, Brane; López Gómez, Federico; Oliveira, Luis J; Mellor, David; Mosbergen, Rowland; Mulder, Nicola; Perez-Riverol, Yasset; Pergl, Robert; Pichler, Horst; Pope, Bernard; Sanz, Ferran; Schneider, Maria V; Stodden, Victoria; Suchecki, Radosław; Svobodová Vařeková, Radka; Talvik, Harry-Anton; Todorov, Ilian; Treloar, Andrew; Tyagi, Sonika; van Gompel, Maarten; Vaughan, Daniel; Via, Allegra; Wang, Xiaochuan; Watson-Haigh, Nathan S; Crouch, Steve

    2017-01-01

    Scientific research relies on computer software, yet software is not always developed following practices that ensure its quality and sustainability. This manuscript does not aim to propose new software development best practices, but rather to provide simple recommendations that encourage the adoption of existing best practices. Software development best practices promote better quality software, and better quality software improves the reproducibility and reusability of research. These recommendations are designed around Open Source values, and provide practical suggestions that contribute to making research software and its source code more discoverable, reusable and transparent. This manuscript is aimed at developers, but also at organisations, projects, journals and funders that can increase the quality and sustainability of research software by encouraging the adoption of these recommendations.

  15. A Legal Guide for the Software Developer.

    ERIC Educational Resources Information Center

    Minnesota Small Business Assistance Office, St. Paul.

    This booklet has been prepared to familiarize the inventor, creator, or developer of a new computer software product or software invention with the basic legal issues involved in developing, protecting, and distributing the software in the United States. Basic types of software protection and related legal matters are discussed in detail,…

  16. Understanding Acceptance of Software Metrics--A Developer Perspective

    ERIC Educational Resources Information Center

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  17. Process Analyzer

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Under a NASA Small Business Innovation Research (SBIR) contract, Axiomatics Corporation developed a shunting dielectric sensor to determine the nutrient level and analyze plant nutrient solutions in the CELSS, NASA's space life support program. (CELSS is an experimental facility investigating closed-cycle plant growth and food processing for long duration manned missions.) The DiComp system incorporates a shunt electrode and is especially sensitive to changes in the dielectric properties of materials, at levels much lower than conventional sensors can detect. The analyzer has exceptional capabilities for predicting the composition of liquid streams or reactions. It measures concentrations and solids content up to 100 percent in applications like agricultural products, petrochemicals, food and beverages. The sensor is easily installed; maintenance is low, and it can be calibrated on line. The software automates data collection and analysis.

  18. Nanopositioning for polarimetric characterization.

    PubMed

    Qureshi, Naser; Kolokoltsev, Oleg V; Ortega-Martínez, Roberto; Ordoñez-Romero, C L

    2008-12-01

    A positioning system with approximately nanometer resolution has been developed based on a new implementation of a motor-driven screw scheme. In contrast to conventional positioning systems based on piezoelectric elements, this system shows remarkably low levels of drift and vibration, and eliminates the need for position feedback during typical data acquisition processes. During positioning or scanning processes, non-repeatability and hysteresis problems inherent in mechanical positioning systems are greatly reduced using a software feedback scheme. As a result, we are able to demonstrate an average mechanical resolution of 1.45 nm and near diffraction-limited imaging using scanning optical microscopy. We propose this approach to nanopositioning as a readily accessible alternative enabling high spatial resolution scanning probe characterization (e.g., polarimetry) and provide practical details for its implementation.

  19. Review on Microstructure Analysis of Metals and Alloys Using Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Rekha, Suganthini; Bupesh Raja, V. K.

    2017-05-01

    Metals and alloys find vast application in the engineering and domestic sectors. The mechanical properties of metals and alloys are influenced by their microstructure, so microstructural investigation is very critical. Traditionally the microstructure is studied using an optical microscope after suitable metallurgical preparation. Over the past few decades computers have been applied to the capture and analysis of optical micrographs. The advent of software for digital image processing and computer vision has been a boon to the analysis of microstructure. In this paper, a literature study of the various developments in microstructural analysis is presented. The conventional optical microscope is complemented by the use of the Scanning Electron Microscope (SEM) and other high-end equipment.

  20. An automatic analyzer of solid state nuclear track detectors using an optic RAM as image sensor

    NASA Astrophysics Data System (ADS)

    Staderini, Enrico Maria; Castellano, Alfredo

    1986-02-01

    An optic RAM is a conventional digital random access read/write dynamic memory device featuring a quartz-windowed package and memory cells regularly ordered on the chip. Such a device is used as an image sensor because each cell retains the data stored in it for a time that depends on the intensity of the light incident on the cell itself. The authors have developed a system which uses an optic RAM to acquire and digitize images from electrochemically etched CR39 solid state nuclear track detectors (SSNTD) at track densities up to 5000 cm-2. On the digital image so obtained, a microprocessor with appropriate software performs image analysis, filtering, track counting and evaluation.

  1. Current Practice in Software Development for Computational Neuroscience and How to Improve It

    PubMed Central

    Gewaltig, Marc-Oliver; Cannon, Robert

    2014-01-01

    Almost all research work in computational neuroscience involves software. As researchers try to understand ever more complex systems, there is a continual need for software with new capabilities. Because of the wide range of questions being investigated, new software is often developed rapidly by individuals or small groups. In these cases, it can be hard to demonstrate that the software gives the right results. Software developers are often open about the code they produce and willing to share it, but there is little appreciation among potential users of the great diversity of software development practices and end results, and how this affects the suitability of software tools for use in research projects. To help clarify these issues, we have reviewed a range of software tools and asked how the culture and practice of software development affects their validity and trustworthiness. We identified four key questions that can be used to categorize software projects and correlate them with the type of product that results. The first question addresses what is being produced. The other three concern why, how, and by whom the work is done. The answers to these questions show strong correlations with the nature of the software being produced, and its suitability for particular purposes. Based on our findings, we suggest ways in which current software development practice in computational neuroscience can be improved and propose checklists to help developers, reviewers, and scientists to assess the quality of software and whether particular pieces of software are ready for use in research. PMID:24465191

  2. Current practice in software development for computational neuroscience and how to improve it.

    PubMed

    Gewaltig, Marc-Oliver; Cannon, Robert

    2014-01-01

    Almost all research work in computational neuroscience involves software. As researchers try to understand ever more complex systems, there is a continual need for software with new capabilities. Because of the wide range of questions being investigated, new software is often developed rapidly by individuals or small groups. In these cases, it can be hard to demonstrate that the software gives the right results. Software developers are often open about the code they produce and willing to share it, but there is little appreciation among potential users of the great diversity of software development practices and end results, and how this affects the suitability of software tools for use in research projects. To help clarify these issues, we have reviewed a range of software tools and asked how the culture and practice of software development affects their validity and trustworthiness. We identified four key questions that can be used to categorize software projects and correlate them with the type of product that results. The first question addresses what is being produced. The other three concern why, how, and by whom the work is done. The answers to these questions show strong correlations with the nature of the software being produced, and its suitability for particular purposes. Based on our findings, we suggest ways in which current software development practice in computational neuroscience can be improved and propose checklists to help developers, reviewers, and scientists to assess the quality of software and whether particular pieces of software are ready for use in research.

  3. Impeller leakage flow modeling for mechanical vibration control

    NASA Technical Reports Server (NTRS)

    Palazzolo, Alan B.

    1996-01-01

    HPOTP and HPFTP vibration test results have exhibited transient and steady characteristics which may be due to impeller leakage path (ILP) related forces. For example, an axial shift in the rotor could suddenly change the ILP clearances and lengths, yielding dynamic coefficient and subsequent vibration changes. ILP models are more complicated than conventional single-component annular seal models due to their radial flow component (Coriolis and centrifugal acceleration), complex geometry (axial/radial clearance coupling), internal boundary (transition) flow conditions between mechanical components along the ILP, and longer length, requiring moment as well as force coefficients. Flow coupling between mechanical components results from mass and energy conservation applied at their interfaces. Typical components along the ILP include an inlet seal, curved shroud, and an exit seal, which may be a stepped labyrinth type. Von Pragenau (MSFC) has modeled labyrinth seals as a series of plain annular seals for leakage and dynamic coefficient prediction. These multi-tooth components increase the total number of 'flow coupled' components in the ILP. Childs developed an analysis for an ILP consisting of a single, constant clearance shroud with an exit seal represented by a lumped flow-loss coefficient. This same geometry was later extended to include compressible flow. The objectives of the current work are to: supply ILP leakage-force impedance-dynamic coefficient modeling software to MSFC engineers, based on incompressible/compressible bulk flow theory; design the software to model a generic geometry ILP described by a series of components lying along an arbitrarily directed path; validate the software by comparison to available test data, CFD and bulk models; and develop a hybrid CFD-bulk flow model of an ILP to improve modeling accuracy within practical run time constraints.

  4. Use of a secure Internet Web site for collaborative medical research.

    PubMed

    Marshall, W W; Haley, R W

    2000-10-11

    Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.

  5. [Instruments in Brazilian Sign Language for assessing the quality of life of the deaf population].

    PubMed

    Chaveiro, Neuma; Duarte, Soraya Bianca Reis; Freitas, Adriana Ribeiro de; Barbosa, Maria Alves; Porto, Celmo Celeno; Fleck, Marcelo Pio de Almeida

    2013-06-01

    To construct versions of the WHOQOL-BREF and WHOQOL-DIS instruments in Brazilian Sign Language to evaluate the Brazilian deaf population's quality of life. The methodology proposed by the World Health Organization (WHOQOL-BREF and WHOQOL-DIS) was used to construct instruments adapted to the deaf community using Brazilian Sign Language (Libras). The research for constructing the instrument took place in 13 phases: 1) creating the QUALITY OF LIFE sign; 2) developing the answer scales in Libras; 3) translation by a bilingual group; 4) synthesized version; 5) first back translation; 6) production of the version in Libras to be provided to the focal groups; 7) carrying out the focal groups; 8) review by a monolingual group; 9) revision by the bilingual group; 10) semantic/syntactic analysis and second back translation; 11) re-evaluation of the back translation by the bilingual group; 12) recording the version into the software; 13) developing the WHOQOL-BREF and WHOQOL-DIS software in Libras. Characteristics peculiar to the culture of the deaf population indicated the necessity of adapting the application methodology of focal groups composed of deaf people. The writing conventions of sign languages have not yet been consolidated, leading to difficulties in graphically registering the translation phases. The linguistic structures that caused major problems in translation were those that included idiomatic Portuguese expressions, for many of which there are no equivalent concepts between Portuguese and Libras. In the end, it was possible to create the WHOQOL-BREF and WHOQOL-DIS software in Libras. The WHOQOL-BREF and the WHOQOL-DIS in Libras will allow the deaf to express themselves about their quality of life in an autonomous way, making it possible to investigate these issues more accurately.

  6. The growing need for microservices in bioinformatics.

    PubMed

    Williams, Christopher L; Sica, Jeffrey C; Killen, Robert T; Balis, Ulysses G J

    2016-01-01

    Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions built with a microservices-based approach provide equal or greater levels of functionality compared with conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Bioinformatics relies on a nimble IT framework that can adapt to changing requirements. The aim here is to present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Use of the microservices framework is an effective methodology for the fabrication and implementation of reliable and innovative software, made possible in a highly collaborative setting.
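
    As a hedged sketch of the microservice idea (not code from the article), the service below exposes exactly one narrowly scoped bioinformatics function, GC content of a sequence, behind a small HTTP interface built from the Python standard library; the endpoint name and JSON shape are assumptions for illustration.

      import json
      from http.server import BaseHTTPRequestHandler, HTTPServer
      from urllib.parse import urlparse, parse_qs

      def gc_content(seq):
          # Percentage of G and C bases in a DNA sequence; 0.0 for an empty input.
          seq = seq.upper()
          return 100.0 * sum(seq.count(b) for b in "GC") / len(seq) if seq else 0.0

      class Handler(BaseHTTPRequestHandler):
          def do_GET(self):
              url = urlparse(self.path)
              if url.path != "/gc":
                  self.send_error(404)
                  return
              seq = parse_qs(url.query).get("seq", [""])[0]
              body = json.dumps({"sequence": seq, "gc_percent": gc_content(seq)}).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          # Each service runs in its own process or container; composition happens over HTTP.
          HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()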

  7. The growing need for microservices in bioinformatics

    PubMed Central

    Williams, Christopher L.; Sica, Jeffrey C.; Killen, Robert T.; Balis, Ulysses G. J.

    2016-01-01

    Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions built with a microservices-based approach provide equal or greater levels of functionality compared with conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework that can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework is an effective methodology for the fabrication and implementation of reliable and innovative software, made possible in a highly collaborative setting. PMID:27994937

  8. Usability and Interoperability Improvements for an EASE-Grid 2.0 Passive Microwave Data Product Using CF Conventions

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Long, D. G.

    2017-12-01

    Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Historical versions of the gridded passive microwave data sets were produced as flat binary files described in human-readable documentation. This format is error-prone and makes it difficult to reliably include all processing and provenance. Funded by NASA MEaSUREs, we have completely reprocessed the gridded data record that includes SMMR, SSM/I-SSMIS and AMSR-E. The new Calibrated Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) files are self-describing. Our approach to the new data set was to create netCDF4 files that use standard metadata conventions and best practices to incorporate file-level, machine- and human-readable contents, geolocation, processing and provenance metadata. We followed the flexible and adaptable Climate and Forecast (CF-1.6) Conventions with respect to their coordinate conventions and map projection parameters. Additionally, we made use of the Attribute Conventions for Dataset Discovery (ACDD-1.3), which provide file-level conventions with spatio-temporal bounds that enable indexing software to search for coverage. Our CETB files also include temporal coverage and spatial resolution in the file-level metadata for human readability. We made use of the JPL CF/ACDD Compliance Checker to guide this work. We tested our file format with real software: for example, the netCDF Command-line Operators (NCO) provide fine-grained control over spatio-temporal subsetting and concatenation of files. The GDAL tools understand the CF metadata and produce fully compliant GeoTIFF files from our data. ArcMap can then reproject the GeoTIFF files on the fly and work with other geolocated data such as coastlines, with no special work required. We expect this combination of standards and well-tested interoperability to significantly improve the usability of this important ESDR for the Earth Science community.
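
    A minimal sketch of the file-level approach, assuming the netCDF4 and numpy Python packages: a small netCDF4 file is written with CF-1.6-style coordinate/grid-mapping metadata and ACDD-1.3-style discovery attributes. The attribute values, grid size, and variable names are placeholders, not the actual CETB product metadata.

      import numpy as np
      from netCDF4 import Dataset

      ds = Dataset("example_tb.nc", "w", format="NETCDF4")
      # ACDD-style discovery metadata at the file level (illustrative values only).
      ds.Conventions = "CF-1.6, ACDD-1.3"
      ds.title = "Example gridded brightness temperature"
      ds.summary = "Illustrative file showing CF/ACDD-style metadata."
      ds.geospatial_lat_min, ds.geospatial_lat_max = -90.0, 90.0
      ds.geospatial_lon_min, ds.geospatial_lon_max = -180.0, 180.0
      ds.time_coverage_start = "2003-01-01T00:00:00Z"
      ds.time_coverage_end = "2003-01-01T23:59:59Z"

      ds.createDimension("y", 180)
      ds.createDimension("x", 360)
      # CF grid-mapping variable describing the projection of the grid.
      crs = ds.createVariable("crs", "i4")
      crs.grid_mapping_name = "lambert_azimuthal_equal_area"
      tb = ds.createVariable("brightness_temperature", "f4", ("y", "x"), zlib=True)
      tb.units = "K"
      tb.long_name = "brightness temperature"
      tb.grid_mapping = "crs"
      tb[:] = np.full((180, 360), 250.0, dtype="f4")
      ds.close()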

  9. Two-dimensional Shear Wave Elastography on Conventional Ultrasound Scanners with Time Aligned Sequential Tracking (TAST) and Comb-push Ultrasound Shear Elastography (CUSE)

    PubMed Central

    Song, Pengfei; Macdonald, Michael C.; Behler, Russell H.; Lanning, Justin D.; Wang, Michael H.; Urban, Matthew W.; Manduca, Armando; Zhao, Heng; Callstrom, Matthew R.; Alizad, Azra; Greenleaf, James F.; Chen, Shigao

    2014-01-01

    Two-dimensional (2D) shear wave elastography presents 2D quantitative shear elasticity maps of tissue, which are clinically useful for both focal lesion detection and diffuse disease diagnosis. Realization of 2D shear wave elastography on conventional ultrasound scanners, however, is challenging due to the low tracking pulse-repetition-frequency (PRF) of these systems. While some clinical and research platforms support software beamforming and plane wave imaging with high PRF, the majority of current clinical ultrasound systems do not have the software beamforming capability, which presents a critical challenge for translating the 2D shear wave elastography technique from laboratory to clinical scanners. To address this challenge, this paper presents a Time Aligned Sequential Tracking (TAST) method for shear wave tracking on conventional ultrasound scanners. TAST takes advantage of the parallel beamforming capability of conventional systems and realizes high-PRF shear wave tracking by sequentially firing tracking vectors and aligning the shear wave data in the temporal direction. The Comb-push Ultrasound Shear Elastography (CUSE) technique was used to simultaneously produce multiple shear wave sources within the field of view (FOV) to enhance the shear wave signal-to-noise ratio (SNR) and facilitate robust reconstruction of 2D elasticity maps. TAST and CUSE were realized on a conventional ultrasound scanner (the General Electric LOGIQ E9). A phantom study showed that the shear wave speed measurements from the LOGIQ E9 were in good agreement with the values measured by other 2D shear wave imaging technologies. An inclusion phantom study showed that the LOGIQ E9 had comparable performance to the Aixplorer (Supersonic Imagine) in terms of bias and precision in measuring different sized inclusions. Finally, in vivo case analysis of a breast with a malignant mass and a liver from a healthy subject demonstrated the feasibility of using the LOGIQ E9 for in vivo 2D shear wave elastography. These promising results indicate that the proposed technique can enable the implementation of 2D shear wave elastography on conventional ultrasound scanners and potentially facilitate wider clinical application of shear wave elastography. PMID:25643079
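
    The sketch below illustrates only the temporal-alignment idea behind sequential tracking, under simplifying assumptions: lateral tracking lines are fired in staggered sub-groups, so each line's slow-time samples carry a known offset, and interpolating every line onto a common time grid re-aligns the shear wave data. The numbers, the linear interpolation, and the simulated displacement are illustrative and do not reproduce the TAST sequence on the LOGIQ E9.

      import numpy as np

      n_lateral, n_slow = 8, 64
      prf_per_group = 2000.0            # Hz, effective slow-time sampling rate per tracking group
      group_size = 4                    # lateral lines fired together in one tracking event
      dt = 1.0 / prf_per_group

      # Acquisition times for each lateral line: lines in later sub-groups start later.
      n_groups = n_lateral // group_size
      offsets = (np.arange(n_lateral) // group_size) * (dt / n_groups)
      t_acq = offsets[:, None] + np.arange(n_slow)[None, :] * dt   # shape (lateral, slow-time)

      # Simulated shear-wave displacement travelling across the lateral positions.
      speed, pitch = 2.0, 0.5e-3        # m/s, m between lateral lines
      disp = np.sin(2 * np.pi * 100.0 * (t_acq - np.arange(n_lateral)[:, None] * pitch / speed))

      # Re-align every lateral line onto one common slow-time grid.
      t_common = np.arange(n_slow) * dt
      aligned = np.vstack([np.interp(t_common, t_acq[i], disp[i]) for i in range(n_lateral)])
      print(aligned.shape)  # (8, 64): one time-aligned slow-time signal per lateral line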

  10. Team Software Development for Aerothermodynamic and Aerodynamic Analysis and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, N.; Atkins, H. L.; Bibb, K. L.; Biedron, R. T.; Carpenter, M. H.; Gnoffo, P. A.; Hammond, D. P.; Jones, W. T.; Kleb, W. L.; Lee-Rausch, E. M.

    2003-01-01

    A collaborative approach to software development is described. The approach employs agile development techniques (project retrospectives, Scrum status meetings, and elements of Extreme Programming) to efficiently develop a cohesive and extensible software suite. The software product under development is a fluid dynamics simulator for performing aerodynamic and aerothermodynamic analysis and design. The functionality of the software product is achieved both through the merging, with substantial rewrite, of separate legacy codes and through the authorship of new routines. Examples of rapid implementation of new functionality demonstrate the benefits obtained with this agile software development process. The appendix contains a discussion of coding issues encountered while porting legacy Fortran 77 code to Fortran 95, software design principles, and a Fortran 95 coding standard.

  11. Results obtained with a low cost software-based audiometer for hearing screening.

    PubMed

    Ferrari, Deborah Viviane; Lopez, Esteban Alejandro; Lopes, Andrea Cintra; Aiello, Camila Piccini; Jokura, Pricila Reis

    2013-07-01

    The implementation of hearing screening programs can be facilitated by reducing operating costs, including the cost of equipment. The Telessaúde (TS) audiometer is a low-cost, software-based, and easy-to-use piece of equipment for conducting audiometric screening. To evaluate the TS audiometer for audiometric screening, a prospective randomized study was performed. Sixty subjects, divided into those who did not have (group A, n = 30) and those who had otologic complaints (group B, n = 30), underwent audiometric screening with conventional and TS audiometers in a randomized order. Pure tones at 25 dB HL were presented at frequencies of 500, 1000, 2000, and 4000 Hz. A "fail" result was recorded when the individual failed to respond to at least one of the stimuli. Pure-tone audiometry was also performed on all participants. The concordance of the screening results from both audiometers was evaluated, and the sensitivity, specificity, and positive and negative predictive values of screening with the TS audiometer were calculated. For group A, 100% of the ears tested passed the screening. For group B, "pass" results were obtained in 34.2% (TS) and 38.3% (conventional) of the ears tested. The agreement between procedures (TS vs. conventional) ranged from 93% to 98%. For group B, screening with the TS audiometer showed 95.5% sensitivity, 90.4% specificity, and positive and negative predictive values of 94.9% and 91.5%, respectively. The results of the TS audiometer were similar to those obtained with the conventional audiometer, indicating that the TS audiometer can be used for audiometric screening.
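
    The agreement figures quoted above follow from a standard 2x2 comparison of the screening result against the pure-tone audiometry reference; the sketch below shows the arithmetic with illustrative counts, not the study's raw data.

      def screening_metrics(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),  # failed screening among truly impaired ears
              "specificity": tn / (tn + fp),  # passed screening among normal ears
              "ppv": tp / (tp + fp),          # truly impaired among ears that failed screening
              "npv": tn / (tn + fn),          # truly normal among ears that passed screening
          }

      # Hypothetical counts for one screening protocol.
      print(screening_metrics(tp=42, fp=2, fn=2, tn=19))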

  12. In vivo evaluation of inter-operator reproducibility of digital dental and conventional impression techniques.

    PubMed

    Kamimura, Emi; Tanaka, Shinpei; Takaba, Masayuki; Tachi, Keita; Baba, Kazuyoshi

    2017-01-01

    The aim of this study was to evaluate and compare the inter-operator reproducibility of three-dimensional (3D) images of teeth captured by a digital impression technique with that of a conventional impression technique in vivo. Twelve participants with complete natural dentition were included in this study. A digital impression of the mandibular molars of these participants was made by two operators with different levels of clinical experience, 3 or 16 years, using an intra-oral scanner (Lava COS, 3M ESPE). A silicone impression was also made by the same operators using the double mix impression technique (Imprint3, 3M ESPE). Stereolithography (STL) data were directly exported from the Lava COS system, while STL data of a plaster model made from the silicone impression were captured by a three-dimensional (3D) laboratory scanner (D810, 3shape). The STL datasets recorded by the two operators were compared using 3D evaluation software and superimposed using the best-fit-algorithm method (least-squares method, PolyWorks, InnovMetric Software) for each impression technique. Inter-operator reproducibility, evaluated as the average discrepancy between corresponding 3D data, was compared between the two techniques (Wilcoxon signed-rank test). Visual inspection of the superimposed datasets revealed that discrepancies between repeated digital impressions were smaller than those observed with silicone impressions. This was confirmed by statistical analysis, which revealed significantly smaller average inter-operator discrepancies with the digital impression technique (0.014 ± 0.02 mm) than with the conventional impression technique (0.023 ± 0.01 mm). The results of this in vivo study suggest that inter-operator reproducibility with a digital impression technique may be better than that of a conventional impression technique and is independent of the clinical experience of the operator.
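
    The "best-fit-algorithm" superimposition mentioned above is, at heart, a least-squares rigid alignment. The sketch below shows that idea with the Kabsch algorithm on already-paired points; correspondence search between STL meshes (as in ICP) and the commercial PolyWorks implementation details are outside its scope, and the test data are synthetic.

      import numpy as np

      def best_fit_transform(P, Q):
          """Rotation R and translation t such that R @ p + t best matches q for paired rows of P, Q."""
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)                    # cross-covariance of centered point sets
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = cq - R @ cp
          return R, t

      rng = np.random.default_rng(0)
      P = rng.normal(size=(100, 3))                    # synthetic "scan 1" points (mm)
      angle = np.deg2rad(10)
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                         [np.sin(angle),  np.cos(angle), 0.0],
                         [0.0, 0.0, 1.0]])
      Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])   # synthetic "scan 2": rotated and shifted copy
      R, t = best_fit_transform(P, Q)
      residual = np.linalg.norm((P @ R.T + t) - Q, axis=1).mean()
      print(f"mean discrepancy after best-fit alignment: {residual:.2e} mm")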

  13. SAO mission support software and data standards, version 1.0

    NASA Technical Reports Server (NTRS)

    Hsieh, P.

    1993-01-01

    This document defines the software developed by the SAO AXAF Mission Support (MS) Program and defines standards for the software development process and control of data products generated by the software. The SAO MS is tasked to develop and use software to perform a variety of functions in support of the AXAF mission. Software is developed by software engineers and scientists, and commercial off-the-shelf (COTS) software is used either directly or customized through the use of scripts to implement analysis procedures. Software controls real-time laboratory instruments, performs data archiving, displays data, and generates model predictions. Much software is used in the analysis of data to generate data products that are required by the AXAF project, for example, on-orbit mirror performance predictions or detailed characterization of the mirror reflection performance with energy.

  14. Making automated computer program documentation a feature of total system design

    NASA Technical Reports Server (NTRS)

    Wolf, A. W.

    1970-01-01

    It is pointed out that in large-scale computer software systems, program documents are too often fraught with errors, out of date, poorly written, and sometimes nonexistent in whole or in part. The means are described by which many of these typical system documentation problems were overcome in a large and dynamic software project. A systems approach was employed which encompassed such items as: (1) configuration management; (2) standards and conventions; (3) collection of program information into central data banks; (4) interaction among executive, compiler, central data banks, and configuration management; and (5) automatic documentation. A complete description of the overall system is given.

  15. Binary-mask generation for diffractive optical elements using microcomputers.

    PubMed

    O'Shea, D C; Beletic, J W; Poutous, M

    1993-05-10

    A new technique for the generation of binary masks for the fabrication of diffractive optical elements is investigated. This technique, which uses commercially available desktop-publishing hardware and software in conjunction with a standard photoreduction camera, is much faster and less expensive than the conventional methods. The short turnaround time and low cost should give researchers a much greater degree of flexibility in the field of binary optics and enable wider application of diffractive-optics technology. Techniques for generating optical elements by using standard software packages that produce PostScript output are described. An evaluation of the dimensional fidelity of the mask reproduction from design to its realization in photoresist is presented.
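
    In the spirit of the described technique, and only as an assumed illustration rather than the authors' procedure, the sketch below writes a PostScript file that draws a binary amplitude mask (a set of Fresnel zone-plate rings, drawn oversize for later photoreduction); the wavelength, focal length, and scale factor are arbitrary example values.

      import math

      wavelength_mm, focal_mm, n_zones = 633e-6, 500.0, 20   # HeNe wavelength, example focal length
      scale = 20.0                                            # draw 20x oversize for photoreduction
      radii = [scale * math.sqrt(m * wavelength_mm * focal_mm) for m in range(1, n_zones + 1)]

      with open("zone_plate.ps", "w") as f:
          f.write("%!PS-Adobe-3.0\n")
          cx, cy = 300, 400                                   # page position in PostScript points
          # Draw zones from the outside in, alternating black and white filled disks to form rings.
          for m in range(n_zones, 0, -1):
              shade = 0 if m % 2 == 0 else 1                  # 0 = black, 1 = white
              r_pts = radii[m - 1] / 25.4 * 72                # mm -> points (72 points per inch)
              f.write(f"{shade} setgray newpath {cx} {cy} {r_pts:.2f} 0 360 arc fill\n")
          f.write("showpage\n")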

  16. Software for universal noiseless coding

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    An overview is provided of the universal noiseless coding algorithms as well as their relationship to the now available FORTRAN implementations. Readers interested in investigating the utility of these algorithms for actual applications should consult both NASA's Computer Software Management and Information Center (COSMIC) and the descriptions of coding techniques provided by Rice (1979). Examples of applying these techniques have also been given by Rice (1975, 1979, 1980). Attention is given to reversible preprocessing, general implementation instructions, naming conventions, and calling arguments. A general applicability of the considered algorithms to solving practical problems is obtained because most real data sources can be simply transformed into the required form by appropriate preprocessing.
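
    For orientation, the sketch below implements the basic Rice (Golomb power-of-two) code that such coders build on: after reversible preprocessing maps samples to small non-negative integers, each value n is split as n = q*2**k + r and written as q in unary followed by r in k bits. The fixed k and string-based bit handling are simplifications; the adaptive, per-block selection of k used in the actual algorithms is omitted.

      def rice_encode(values, k):
          bits = []
          for n in values:
              q, r = n >> k, n & ((1 << k) - 1)
              bits.append("1" * q + "0")            # quotient in unary, terminated by a 0
              if k:
                  bits.append(format(r, f"0{k}b"))  # remainder in k bits
          return "".join(bits)

      def rice_decode(bitstring, k, count):
          out, i = [], 0
          for _ in range(count):
              q = 0
              while bitstring[i] == "1":            # read the unary quotient
                  q, i = q + 1, i + 1
              i += 1                                # skip the terminating 0
              r = int(bitstring[i:i + k], 2) if k else 0
              i += k
              out.append((q << k) | r)
          return out

      data = [0, 3, 5, 1, 2, 7, 4]
      encoded = rice_encode(data, k=2)
      assert rice_decode(encoded, k=2, count=len(data)) == data
      print(encoded, "->", rice_decode(encoded, k=2, count=len(data)))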

  17. Real-time determination of sarcomere length of a single cardiomyocyte during contraction

    PubMed Central

    Kalda, Mari; Vendelin, Marko

    2013-01-01

    Sarcomere length of a cardiomyocyte is an important control parameter for physiology studies on a single cell level; for instance, its accurate determination in real time is essential for performing single cardiomyocyte contraction experiments. The aim of this work is to develop an efficient and accurate method for estimating the mean sarcomere length of a contracting cardiomyocyte using microscopy images as input. The novelty of the developed method lies in 1) using an unbiased measure of similarity to eliminate the systematic errors of conventional autocorrelation function (ACF)-based methods when applied to a region of interest of an image, 2) using a semianalytical, seminumerical approach for evaluating the similarity measure to take into account the spatial dependence of neighboring image pixels, and 3) using a detrend algorithm to extract the sarcomere striation pattern content from the microscopy images. The developed sarcomere length estimation procedure has superior computational efficiency and estimation accuracy compared with conventional ACF- and spectral-analysis-based methods using the fast Fourier transform. As shown by analyzing synthetic images with known periodicity, the estimates obtained by the developed method are more accurate at the subpixel level than those obtained using ACF analysis. When applied in practice on rat cardiomyocytes, our method was found to be robust to the choice of the region of interest, which may 1) include projections of carbon fibers and the nucleus, 2) have uneven background, and 3) be slightly disoriented with respect to the average direction of the sarcomere striation pattern. The developed method is implemented in open-source software. PMID:23255581
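
    For contrast with the paper's unbiased similarity measure, the sketch below shows the conventional ACF-based estimate it improves on: detrend an intensity profile sampled along the cell, autocorrelate it, and take the lag of the dominant non-zero peak as the striation period in pixels. The synthetic profile, the mean-removal detrend, and the peak search are illustrative simplifications.

      import numpy as np

      def acf_period(profile):
          x = profile - profile.mean()                          # simple detrend: remove the mean
          acf = np.correlate(x, x, mode="full")[x.size - 1:]    # one-sided autocorrelation
          half = acf[: x.size // 2] / acf[0]
          first_min = int(np.argmin(half))                      # dip before the first repetition
          return first_min + int(np.argmax(half[first_min:]))   # peak one striation period later

      true_period_px = 9
      px = np.arange(200)
      rng = np.random.default_rng(1)
      profile = 1.0 + 0.3 * np.sin(2 * np.pi * px / true_period_px) + 0.03 * rng.normal(size=px.size)
      print("estimated striation period:", acf_period(profile), "pixels")  # expected: about 9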

  18. Accurately Diagnosing Uric Acid Stones from Conventional Computerized Tomography Imaging: Development and Preliminary Assessment of a Pixel Mapping Software.

    PubMed

    Ganesan, Vishnu; De, Shubha; Shkumat, Nicholas; Marchini, Giovanni; Monga, Manoj

    2018-02-01

    Preoperative determination of uric acid stones from computerized tomography imaging would be of tremendous clinical use. We sought to design a software algorithm that could apply data from noncontrast computerized tomography to predict the presence of uric acid stones. Patients with pure uric acid and calcium oxalate stones were identified from our stone registry. Only stones greater than 4 mm which were clearly traceable from initial computerized tomography to final composition were included in analysis. A semiautomated computer algorithm was used to process image data. Average and maximum HU, eccentricity (deviation from a circle) and kurtosis (peakedness vs flatness) were automatically generated. These parameters were examined in several mathematical models to predict the presence of uric acid stones. A total of 100 patients, of whom 52 had calcium oxalate and 48 had uric acid stones, were included in the final analysis. Uric acid stones were significantly larger (12.2 vs 9.0 mm, p = 0.03) but calcium oxalate stones had higher mean attenuation (457 vs 315 HU, p = 0.001) and maximum attenuation (918 vs 553 HU, p <0.001). Kurtosis was significantly higher in each axis for calcium oxalate stones (each p <0.001). A composite algorithm using attenuation distribution pattern, average attenuation and stone size had overall 89% sensitivity, 91% specificity, 91% positive predictive value and 89% negative predictive value to predict uric acid stones. A combination of stone size, attenuation intensity and attenuation pattern from conventional computerized tomography can distinguish uric acid stones from calcium oxalate stones with high sensitivity and specificity. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
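
    As an assumed illustration of the kind of per-stone pixel statistics such an algorithm combines (not the published model or its thresholds), the sketch below computes mean and maximum attenuation and the kurtosis of the HU distribution for a synthetic stone region, then applies a toy composite rule; it assumes numpy and scipy are available.

      import numpy as np
      from scipy.stats import kurtosis

      rng = np.random.default_rng(2)
      # Synthetic region of interest: HU values inside a segmented stone (illustrative only).
      stone_hu = rng.normal(loc=320, scale=60, size=400)

      features = {
          "mean_hu": float(stone_hu.mean()),
          "max_hu": float(stone_hu.max()),
          "kurtosis": float(kurtosis(stone_hu)),   # peakedness vs flatness of the HU histogram
      }
      # Toy composite rule in the spirit of the study; the thresholds are made up.
      likely_uric_acid = features["mean_hu"] < 400 and features["max_hu"] < 700
      print(features, "likely uric acid" if likely_uric_acid else "likely calcium oxalate")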

  19. Ejection fraction in myocardial perfusion imaging assessed with a dynamic phantom: comparison between IQ-SPECT and LEHR.

    PubMed

    Hippeläinen, Eero; Mäkelä, Teemu; Kaasalainen, Touko; Kaleva, Erna

    2017-12-01

    Developments in single photon emission computed tomography (SPECT) instrumentation and reconstruction methods present a potential for decreasing acquisition times. One such recent option for myocardial perfusion imaging (MPI) is IQ-SPECT. This study was motivated by the inconsistency in the reported ejection fraction (EF) and left ventricular (LV) volume results between IQ-SPECT and more conventional low-energy high-resolution (LEHR) collimation protocols. IQ-SPECT and LEHR quantitative results were compared while the equivalent number of iterations (EI) was varied. The end-diastolic volume (EDV), end-systolic volume (ESV), and derived EF values were investigated. A dynamic heart phantom was used to produce repeatable ESVs, EDVs and EFs. Phantom performance was verified by comparing the set EF values to those measured from a gated multi-slice X-ray computed tomography (CT) scan (EFTrue). The phantom, with EF settings of 45, 55, 65 and 70%, was imaged with both IQ-SPECT and LEHR protocols. The data were reconstructed with different EI, and two commonly used clinical myocardium delineation software packages were used to evaluate the LV volumes. The CT verification showed that the phantom EF settings were repeatable and accurate, with EFTrue being within 1 percentage point of the manufacturer's nominal value. Depending on EI, both MPI protocols can be made to produce correct EF estimates, but the IQ-SPECT protocol produced on average 41% and 42% smaller EDV and ESV, respectively, compared with the phantom's volumes, while the LEHR protocol underestimated these volumes by 24% and 21%. The volume results were largely similar between the delineation methods used. The reconstruction parameters can greatly affect the volume estimates obtained from perfusion studies. IQ-SPECT produces systematically smaller LV volumes than the conventional LEHR MPI protocol. The volume estimates are also software dependent.
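
    The EF values compared above follow directly from the delineated volumes, EF = (EDV - ESV) / EDV, which is why a roughly proportional underestimation of both volumes can still yield a correct EF; the short sketch below uses illustrative numbers, not the phantom settings.

      def ejection_fraction(edv_ml, esv_ml):
          return 100.0 * (edv_ml - esv_ml) / edv_ml

      edv, esv = 120.0, 54.0
      print(f"EF = {ejection_fraction(edv, esv):.0f}%")              # 55%
      # Scaling both volumes down by the same factor leaves EF unchanged:
      print(f"EF = {ejection_fraction(0.6 * edv, 0.6 * esv):.0f}%")  # still 55%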

  20. Software technology insertion: A study of success factors

    NASA Technical Reports Server (NTRS)

    Lydon, Tom

    1990-01-01

    Managing software development in large organizations has become increasingly difficult due to increasing technical complexity, stricter government standards, a shortage of experienced software engineers, competitive pressure for improved productivity and quality, the need to co-develop hardware and software together, and the rapid changes in both hardware and software technology. The 'software factory' approach to software development minimizes risks while maximizing productivity and quality through standardization, automation, and training. However, in practice, this approach is relatively inflexible when adopting new software technologies. The methods that a large multi-project software engineering organization can use to increase the likelihood of successful software technology insertion (STI), especially in a standardized engineering environment, are described.
