Sample records for test generation tools

  1. Test-Case Generation using an Explicit State Model Checker Final Report

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Gao, Jimin

    2003-01-01

    In the project 'Test-Case Generation using an Explicit State Model Checker' we have extended an existing tools infrastructure for formal modeling to export Java code so that we can use the NASA Ames tool Java Pathfinder (JPF) for test case generation. We have completed a translator from our source language RSML-e to Java and conducted initial studies of how JPF can be used as a testing tool. In this final report, we provide a detailed description of the translation approach as implemented in our tools.

  2. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    NASA Astrophysics Data System (ADS)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying a testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be executed automatically against the target. The tool has significantly reduced the time required for test construction and generation, and has reduced the number of test scripts while increasing coverage.
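
    For illustration only (this is not the Aalborg/Novo Nordisk tool), a minimal Python sketch of edge-coverage test generation from a state machine: breadth-first search yields a shortest event path to each state, and one test per transition is emitted as a line of a simple scripting language. The machine, its states, and the send() script syntax are all hypothetical.

      from collections import deque

      # Hypothetical device UI state machine: state -> [(event, next_state), ...]
      MACHINE = {
          "Idle":   [("power_on", "Menu")],
          "Menu":   [("select_dose", "Dosing"), ("power_off", "Idle")],
          "Dosing": [("confirm", "Menu"), ("cancel", "Menu")],
      }
      INITIAL = "Idle"

      def shortest_paths():
          """BFS from the initial state: shortest event sequence reaching each state."""
          paths, queue = {INITIAL: []}, deque([INITIAL])
          while queue:
              s = queue.popleft()
              for event, nxt in MACHINE.get(s, []):
                  if nxt not in paths:
                      paths[nxt] = paths[s] + [event]
                      queue.append(nxt)
          return paths

      def edge_coverage_suite():
          """One test case per transition: a prefix reaching the source state, then the edge."""
          prefix = shortest_paths()
          return [prefix[src] + [event]
                  for src, edges in MACHINE.items() for event, _ in edges]

      for i, test in enumerate(edge_coverage_suite(), 1):
          print(f"test_{i}: " + "; ".join(f"send({e})" for e in test))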

  3. Model-Driven Test Generation of Distributed Systems

    NASA Technical Reports Server (NTRS)

    Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin

    2012-01-01

    This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.
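
    As background for the MC/DC criterion mentioned above, the sketch below (hypothetical guard expression, not from the TTEthernet work) brute-forces MC/DC "independence pairs": for each condition in a decision, two test vectors that differ only in that condition and flip the decision's outcome.

      from itertools import product

      def mcdc_pairs(decision, n):
          """Map each condition index to vector pairs showing it independently
          affects the decision's outcome (the core MC/DC obligation)."""
          pairs = {i: [] for i in range(n)}
          for vec in product([False, True], repeat=n):
              for i in range(n):
                  other = list(vec)
                  other[i] = not other[i]
                  other = tuple(other)
                  if vec < other and decision(*vec) != decision(*other):
                      pairs[i].append((vec, other))
          return pairs

      # Hypothetical protocol guard with three conditions.
      guard = lambda a, b, c: (a or b) and c
      for i, plist in mcdc_pairs(guard, 3).items():
          print(f"condition {i}: independence pairs {plist}")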

  4. Efficient Data Generation and Publication as a Test Tool

    NASA Technical Reports Server (NTRS)

    Einstein, Craig Jakob

    2017-01-01

    A tool to facilitate the generation and publication of test data was created to test the individual components of a command and control system designed to launch spacecraft. Specifically, this tool was built to ensure messages are properly passed between system components. The tool can also be used to test whether the appropriate groups have access (read/write privileges) to the correct messages. The messages passed between system components take the form of unique identifiers with associated values. These identifiers are alphanumeric strings that identify the type of message and the additional parameters that are contained within the message. The values that are passed with the message depend on the identifier. The data generation tool allows for the efficient creation and publication of these messages. A configuration file can be used to set the parameters of the tool and also specify which messages to pass.
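
    A minimal sketch of the config-driven pattern this abstract describes, with entirely hypothetical identifiers, value specs, and a print() stand-in for the publication bus:

      import json, random, time

      # Hypothetical configuration: identifiers to publish and how to draw values.
      CONFIG = {
          "GSE.TANK1.PRESSURE": {"type": "float", "low": 0.0, "high": 150.0},
          "GSE.TANK1.VALVE":    {"type": "enum", "choices": ["OPEN", "CLOSED"]},
      }

      def generate_message(identifier, spec):
          """Build one identifier/value message according to its config entry."""
          if spec["type"] == "float":
              value = random.uniform(spec["low"], spec["high"])
          else:
              value = random.choice(spec["choices"])
          return {"id": identifier, "value": value, "timestamp": time.time()}

      for ident, spec in CONFIG.items():
          # Stand-in for the real publication mechanism.
          print(json.dumps(generate_message(ident, spec)))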

  5. The Generation Rate of Respirable Dust from Cutting Fiber Cement Siding Using Different Tools

    PubMed Central

    Qi, Chaolong; Echt, Alan; Gressel, Michael G

    2017-01-01

    This article describes the evaluation of the generation rate of respirable dust (GAPS, defined as the mass of respirable dust generated per unit linear length cut) from cutting fiber cement siding using different tools in a laboratory testing system. We used an aerodynamic particle sizer spectrometer (APS) to continuously monitor the real-time size distributions of the dust throughout cutting tests with a variety of tools, and calculated the generation rate of respirable dust for each testing condition from the size distribution data. The test results verify that power shears provided an almost dust-free operation, with a GAPS of 0.006 gram meter−1 (g m−1) under the testing conditions. For the same power saws, cuts using saw blades with more teeth generated more respirable dust. Using the same blade on all four miter saws tested in this study, a positive linear correlation was found between a saw's blade rotating speed and its dust generation rate. In addition, a circular saw running at the highest blade rotating speed of 9068 RPM generated the greatest amount of dust. All of the miter saws generated less dust in the 'chopping' mode than in the 'chopping and sliding' mode. For the tested saws, GAPS consistently decreased with increases in the saw cutting feed rate and the number of boards in the stack. All of the test results indicate that fewer cutting interactions between the saw blade's teeth and the siding board per unit linear length of cut tend to result in a lower generation rate of respirable dust. These results may help guide optimal operation in practice and future tool development aimed at minimizing dust generation while producing a satisfactory cut. PMID:28395343
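
    To make the GAPS definition concrete, here is a rough sketch of the calculation from size-distribution data. The channel data, the 2.7 g cm−3 particle density, the 0.5 m cut length, and the hard 4-µm respirable cutoff are all assumptions for illustration; the study itself would apply the standard respirable sampling convention rather than a sharp cutoff.

      import math

      PARTICLE_DENSITY = 2.7e3   # kg/m^3, assumed for fiber cement dust
      CUT_LENGTH = 0.5           # metres of siding cut while sampling (assumed)

      # Hypothetical APS channels: (aerodynamic diameter in um, particle count)
      channels = [(0.5, 9.0e6), (1.0, 6.0e6), (2.5, 3.0e6), (5.0, 1.0e6)]

      def respirable_mass_kg(channels):
          """Sum spherical-particle mass over channels, keeping sizes below a
          simple 4-um cutoff as the 'respirable' fraction (illustrative only)."""
          total = 0.0
          for d_um, count in channels:
              d_m = d_um * 1e-6
              if d_um <= 4.0:
                  total += PARTICLE_DENSITY * math.pi / 6.0 * d_m ** 3 * count
          return total

      gaps = respirable_mass_kg(channels) / CUT_LENGTH   # kg per metre of cut
      print(f"GAPS = {gaps * 1e3:.6f} g m^-1")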

  7. Analyzing Item Generation with Natural Language Processing Tools for the "TOEIC"® Listening Test. Research Report. ETS RR-17-52

    ERIC Educational Resources Information Center

    Yoon, Su-Youn; Lee, Chong Min; Houghton, Patrick; Lopez, Melissa; Sakano, Jennifer; Loukina, Anastasia; Krovetz, Bob; Lu, Chi; Madani, Nitin

    2017-01-01

    In this study, we developed assistive tools and resources to support TOEIC® Listening test item generation. There has recently been an increased need for a large pool of items for these tests. This need has, in turn, inspired efforts to increase the efficiency of item generation while maintaining the quality of the created items. We aimed to…

  8. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines Math- Works and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  9. An Ada programming support environment

    NASA Technical Reports Server (NTRS)

    Tyrrill, AL; Chan, A. David

    1986-01-01

    The toolset of an Ada Programming Support Environment (APSE) being developed at North American Aircraft Operations (NAAO) of Rockwell International is described. The APSE is resident on three different hosts and must support developments for the hosts and for embedded targets. Tools and developed software must be freely portable between the hosts. The toolset includes the usual editors, compilers, linkers, debuggers, configuration managers, and documentation tools. Generally, these are being supplied by the host computer vendors. Other tools, for example a pretty printer, cross referencer, compilation order tool, and management tools, were obtained from public-domain sources, are implemented in Ada, and are being ported to the hosts. Several tools being implemented in-house are of interest; these include an Ada Design Language processor based on compilable Ada. A Standalone Test Environment Generator facilitates test tool construction and partially automates unit-level testing. A Code Auditor/Static Analyzer permits Ada programs to be evaluated against measures of quality. An Ada Comment Box Generator partially automates the generation of header comment boxes.

  10. Harnessing scientific literature reports for pharmacovigilance. Prototype software analytical tool development and usability testing.

    PubMed

    Sorbello, Alfred; Ripple, Anna; Tonning, Joseph; Munoz, Monica; Hasan, Rashedul; Ly, Thomas; Francis, Henry; Bodenreider, Olivier

    2017-03-22

    We seek to develop a prototype software analytical tool to augment FDA regulatory reviewers' capacity to harness scientific literature reports in PubMed/MEDLINE for pharmacovigilance and adverse drug event (ADE) safety signal detection. We also aim to gather feedback through usability testing to assess design, performance, and user satisfaction with the tool. A prototype, open-source, web-based software analytical tool generated statistical disproportionality data mining signal scores and dynamic visual analytics for ADE safety signal detection and management. We leveraged Medical Subject Heading (MeSH) indexing terms assigned to published citations in PubMed/MEDLINE to generate candidate drug-adverse event pairs for quantitative data mining. Six FDA regulatory reviewers participated in usability testing by employing the tool as part of their ongoing real-life pharmacovigilance activities to provide subjective feedback on its practical impact, added value, and fitness for use. All usability test participants cited the tool's ease of learning, ease of use, and generation of quantitative ADE safety signals, some of which corresponded to known established adverse drug reactions. Potential concerns included the comparability of the tool's automated literature search relative to a manual 'all fields' PubMed search, missing drug and adverse event terms, interpretation of signal scores, and integration with existing computer-based analytical tools. Usability testing demonstrated that this novel tool can automate the detection of ADE safety signals from published literature reports. Various mitigation strategies are described to foster improvements in design, productivity, and end user satisfaction.
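
    The disproportionality scores mentioned above can take several forms; a common one is the proportional reporting ratio (PRR). A hedged sketch with hypothetical citation counts (the tool's actual statistics may differ):

      import math

      def prr(a, b, c, d):
          """Proportional reporting ratio for one drug-event pair.
          a: citations with drug AND event, b: drug without event,
          c: event without drug, d: neither."""
          return (a / (a + b)) / (c / (c + d))

      def prr_ci95(a, b, c, d):
          """Approximate 95% confidence interval on the log scale."""
          log_prr = math.log(prr(a, b, c, d))
          se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
          return math.exp(log_prr - 1.96 * se), math.exp(log_prr + 1.96 * se)

      a, b, c, d = 40, 960, 200, 98800   # hypothetical MeSH-derived counts
      print(f"PRR = {prr(a, b, c, d):.1f}, 95% CI = {prr_ci95(a, b, c, d)}")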

  11. Test Generator for MATLAB Simulations

    NASA Technical Reports Server (NTRS)

    Henry, Joel

    2011-01-01

    MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.

  12. Unit Testing for Command and Control Systems

    NASA Technical Reports Server (NTRS)

    Alexander, Joshua

    2018-01-01

    Unit tests were created to evaluate the functionality of a Data Generation and Publication tool for a command and control system. These unit tests were developed to continually evaluate the tool and ensure that it functions properly as the command and control system grows in size and scope. Unit tests are a crucial part of testing any software project and are especially instrumental in the development of a command and control system. They save resources, time, and costs associated with testing, and catch issues before they become increasingly difficult and costly to fix. The unit tests produced for the Data Generation and Publication tool assure users and stakeholders of its functionality and offer assurances that are vital to launching spacecraft safely.
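
    A minimal sketch of what such unit tests can look like, using Python's unittest against a hypothetical stand-in for the tool's message builder (the real system's API is not shown in the abstract):

      import unittest

      def build_message(identifier, value):
          """Hypothetical message builder: identifier plus associated value."""
          if not identifier or not identifier.replace(".", "").isalnum():
              raise ValueError("malformed identifier")
          return {"id": identifier, "value": value}

      class BuildMessageTest(unittest.TestCase):
          def test_valid_identifier_round_trips(self):
              msg = build_message("GSE.TANK1.PRESSURE", 42.0)
              self.assertEqual(msg["id"], "GSE.TANK1.PRESSURE")
              self.assertEqual(msg["value"], 42.0)

          def test_malformed_identifier_is_rejected(self):
              with self.assertRaises(ValueError):
                  build_message("not a valid id!", 0)

      if __name__ == "__main__":
          unittest.main()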

  13. Digital test assembly of truck parts with the IMMA-tool--an illustrative case.

    PubMed

    Hanson, L; Högberg, D; Söderholm, M

    2012-01-01

    Several digital human modelling (DHM) tools have been developed for simulation and visualisation of human postures and motions. In 2010, the DHM tool IMMA (Intelligently Moving Manikins) was introduced; it uses advanced path planning techniques to generate collision-free and biomechanically acceptable motions for digital human models (as well as parts) in complex assembly situations. The aim of the paper is to illustrate how the IPS/IMMA tool is used at Scania CV AB in a digital test assembly process, and to compare the tool with other DHM tools on the market. The illustrated case of using the IMMA tool, here combined with the path planner tool IPS, indicates that the tool is promising. Its major strengths are its user-friendly interface, its motion generation algorithms, its batch simulation of manikins, and its ergonomics assessment methods that consider time.

  14. A digital flight control system verification laboratory

    NASA Technical Reports Server (NTRS)

    De Feo, P.; Saib, S.

    1982-01-01

    A NASA/FAA program has been established for the verification and validation of digital flight control systems (DFCS), with the primary objective being the development and analysis of automated verification tools. In order to enhance the capabilities, effectiveness, and ease of use of the test environment, software verification tools can be applied. The tool design includes a static analyzer, an assertion generator, a symbolic executor, a dynamic analysis instrument, and an automated documentation generator. Static and dynamic tools are integrated with error detection capabilities, resulting in a facility that analyzes a representative testbed of DFCS software. Future investigations will focus on expanding the set of software test tools and on assessing cost effectiveness.

  15. Two New Tools for Glycopeptide Analysis Researchers: A Glycopeptide Decoy Generator and a Large Data Set of Assigned CID Spectra of Glycopeptides.

    PubMed

    Lakbub, Jude C; Su, Xiaomeng; Zhu, Zhikai; Patabandige, Milani W; Hua, David; Go, Eden P; Desaire, Heather

    2017-08-04

    The glycopeptide analysis field is tightly constrained by a lack of effective tools that translate mass spectrometry data into meaningful chemical information, and perhaps the most challenging aspect of building effective glycopeptide analysis software is designing an accurate scoring algorithm for MS/MS data. We provide the glycoproteomics community with two tools to address this challenge. The first tool, a curated set of 100 expert-assigned CID spectra of glycopeptides, contains a diverse set of spectra from a variety of glycan types; the second tool, Glycopeptide Decoy Generator, is a new software application that generates glycopeptide decoys de novo. We developed these tools so that emerging methods of assigning glycopeptides' CID spectra could be rigorously tested. Software developers or those interested in developing skills in expert (manual) analysis can use these tools to facilitate their work. We demonstrate the tools' utility in assessing the quality of one particular glycopeptide software package, GlycoPep Grader, which assigns glycopeptides to CID spectra. We first acquired the set of 100 expert assigned CID spectra; then, we used the Decoy Generator (described herein) to generate 20 decoys per target glycopeptide. The assigned spectra and decoys were used to test the accuracy of GlycoPep Grader's scoring algorithm; new strengths and weaknesses were identified in the algorithm using this approach. Both newly developed tools are freely available. The software can be downloaded at http://glycopro.chem.ku.edu/GPJ.jar.

  16. Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder

    NASA Technical Reports Server (NTRS)

    Staats, Matt

    2009-01-01

    We present work on a prototype tool based on the Java Pathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of the generated tests.
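
    Test suite reduction, one of the tasks mentioned, is commonly done with a greedy set-cover heuristic; a sketch under that assumption (not necessarily the JPF extension's exact algorithm):

      def reduce_suite(coverage):
          """Keep the test covering the most still-unsatisfied obligations,
          repeating until every obligation is covered (greedy set cover)."""
          remaining = set().union(*coverage.values())
          kept = []
          while remaining:
              best = max(coverage, key=lambda t: len(coverage[t] & remaining))
              kept.append(best)
              remaining -= coverage[best]
          return kept

      coverage = {   # hypothetical test -> MC/DC obligations it satisfies
          "t1": {"ob1", "ob2", "ob3"},
          "t2": {"ob3", "ob4"},
          "t3": {"ob1", "ob4", "ob5"},
          "t4": {"ob2"},
      }
      print(reduce_suite(coverage))   # ['t1', 't3'] already covers all five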

  17. BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool

    ERIC Educational Resources Information Center

    Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.

    2006-01-01

    BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of our several fault-based test case selection strategies. The generated test cases are considered to be fault-based, because they are aiming at the detection of particular faults. For example, when the Boolean expression is in…
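
    The fault-based idea can be illustrated by checking which inputs distinguish an expression from a faulty mutant; any such input detects that fault. This sketch uses a hypothetical expression and a literal negation fault, not BEAT's actual selection strategies:

      from itertools import product

      def distinguishing_tests(original, mutant, n):
          """Truth assignments on which the original expression and the
          mutant disagree; each one is a fault-detecting test case."""
          return [v for v in product([False, True], repeat=n)
                  if original(*v) != mutant(*v)]

      original = lambda a, b, c: (a and b) or c
      mutant   = lambda a, b, c: (a and not b) or c   # literal negation fault on b

      for vec in distinguishing_tests(original, mutant, 3):
          print("fault-detecting test case:", vec)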

  18. Test Generators: Teacher's Tool or Teacher's Headache?

    ERIC Educational Resources Information Center

    Eiser, Leslie

    1988-01-01

    Discusses the advantages and disadvantages of test generation programs. Includes setting up, printing exams and "bells and whistles." Reviews eight computer packages for Apple and IBM personal computers. Compares features, costs, and usage. (CW)

  19. "Think aloud" and "Near live" usability testing of two complex clinical decision support tools.

    PubMed

    Richardson, Safiya; Mishuris, Rebecca; O'Connell, Alexander; Feldstein, David; Hess, Rachel; Smith, Paul; McCullagh, Lauren; McGinn, Thomas; Mann, Devin

    2017-10-01

    Low provider adoption continues to be a significant barrier to realizing the potential of clinical decision support. "Think Aloud" and "Near Live" usability testing were conducted on two clinical decision support tools. Each was composed of an alert, a clinical prediction rule that estimated the risk of either group A Streptococcus pharyngitis or pneumonia, and an automatic order set based on risk. The objective of this study was to further understanding of the facilitators of usability and to evaluate the types of additional information gained from proceeding to "Near Live" testing after completing "Think Aloud". This was a qualitative observational study conducted at a large academic health care system with 12 primary care providers. During "Think Aloud" testing, participants were provided with written clinical scenarios and asked to verbalize their thought process while interacting with the tool. During "Near Live" testing, participants interacted with a mock patient. Morae usability software was used to record full screen capture and audio during every session. Participant comments were placed into coding categories and analyzed for generalizable themes, and themes were compared across usability methods. "Think Aloud" and "Near Live" usability testing generated similar themes under the coding categories visibility, workflow, content, understandability and navigation. However, they generated significantly different themes under the coding categories usability, practical usefulness and medical usefulness. During both types of testing participants found the tool easier to use when important text was distinct in its appearance, alerts were passive and appropriately timed, content was up to date, language was clear and simple, and each component of the tool included obvious indicators of next steps. Participant comments reflected higher expectations for usability and usefulness during "Near Live" testing. For example, visit aids, such as automatically generated order sets, were felt to be less useful during "Near Live" testing because they would not be all-inclusive for the visit. These complementary types of usability testing generated unique and generalizable insights. Feedback during "Think Aloud" testing primarily helped to improve the tools' ease of use. The additional feedback from "Near Live" testing, which mimics a real clinical encounter, was helpful for eliciting key barriers and facilitators to provider workflow and adoption. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Generating DEM from LIDAR data - comparison of available software tools

    NASA Astrophysics Data System (ADS)

    Korzeniowska, K.; Lacka, M.

    2011-12-01

    In recent years many software tools and applications have appeared that offer procedures, scripts and algorithms to process and visualize ALS data. This variety of software tools and of "point cloud" processing methods contributed to the aim of this study: to assess the algorithms available in various software tools that are used to classify LIDAR "point cloud" data, through a careful examination of the Digital Elevation Models (DEMs) generated from LIDAR data on the basis of these algorithms. The work focused on the most important available software tools, both commercial and open source. Two sites in a mountain area were selected for the study; the area of each site is 0.645 sq km. DEMs generated with the analysed software tools were compared with a reference dataset, generated using manual methods to eliminate non-ground points. The surfaces were analysed using raster analysis. Minimum, maximum and mean differences between the reference DEM and the DEMs generated with the analysed software tools were calculated, together with the Root Mean Square Error. Differences between DEMs were also examined visually using transects along the grid axes in the test sites.
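
    The raster comparison described here reduces to simple array statistics; a sketch with NumPy on toy grids (array shapes and noise model are illustrative, not the study's data):

      import numpy as np

      def compare_dems(dem, reference):
          """Min/max/mean difference and RMSE between a generated DEM and
          the reference raster (same grid; NaN marks no-data cells)."""
          diff = (dem - reference).ravel()
          diff = diff[~np.isnan(diff)]
          return {"min": float(diff.min()), "max": float(diff.max()),
                  "mean": float(diff.mean()),
                  "rmse": float(np.sqrt(np.mean(diff ** 2)))}

      rng = np.random.default_rng(0)
      reference = rng.uniform(500.0, 900.0, size=(100, 100))   # toy mountain DEM
      generated = reference + rng.normal(0.0, 0.15, size=(100, 100))
      print(compare_dems(generated, reference))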

  2. Automated Testcase Generation for Numerical Support Functions in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Schnieder, Stefan-Alexander

    2014-01-01

    We present a tool for the automatic generation of test stimuli for small numerical support functions, e.g., code for trigonometric functions, quaternions, filters, or table lookup. Our tool is based on KLEE to produce a set of test stimuli for full path coverage. We use a method of iterative deepening over abstractions to deal with floating-point values. During actual testing the stimuli exercise the code against a reference implementation. We illustrate our approach with results of experiments with low-level trigonometric functions, interpolation routines, and mathematical support functions from an open source UAS autopilot.
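
    When stimuli exercise code against a reference implementation, floating-point outputs are best compared with a relative tolerance such as units in the last place (ULPs) rather than a fixed epsilon. A sketch with a stand-in function (my_sin and the stimuli are hypothetical; math.ulp requires Python 3.9+):

      import math

      def assert_close(actual, expected, max_ulps=4):
          """Fail if the implementation strays more than a few ULPs from
          the reference value."""
          if abs(actual - expected) > max_ulps * math.ulp(expected):
              raise AssertionError(f"{actual} != {expected} within {max_ulps} ulps")

      def my_sin(x):
          # Stand-in for the embedded numerical support function under test.
          return math.sin(x)

      for stimulus in [0.0, 0.5, math.pi / 2, 3.0]:   # e.g. generated test stimuli
          assert_close(my_sin(stimulus), math.sin(stimulus))
      print("all stimuli passed")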

  3. Software for Automated Testing of Mission-Control Displays

    NASA Technical Reports Server (NTRS)

    OHagan, Brian

    2004-01-01

    MCC Display Cert Tool is a set of software tools for automated testing of computer-terminal displays in spacecraft mission-control centers, including those of the space shuttle and the International Space Station. This software makes it possible to perform tests that are more thorough, take less time, and are less likely to lead to erroneous results, relative to tests performed manually. This software enables comparison of two sets of displays to report command and telemetry differences, generates test scripts for verifying telemetry and commands, and generates a documentary record containing display information, including version and corrective-maintenance data. At the time of reporting the information for this article, work was continuing to add a capability for validation of display parameters against a reconfiguration file.

  4. Usability Testing of a National Substance Use Screening Tool Embedded in Electronic Health Records.

    PubMed

    Press, Anne; DeStio, Catherine; McCullagh, Lauren; Kapoor, Sandeep; Morley, Jeanne; Conigliaro, Joseph

    2016-07-08

    Screening, brief intervention, and referral to treatment (SBIRT) is currently being implemented into health systems nationally via paper and electronic methods. The purpose of this study was to evaluate the integration of an electronic SBIRT tool into an existing paper-based SBIRT clinical workflow in a patient-centered medical home. Usability testing was conducted in an academic ambulatory clinic. Two rounds of usability testing were done with medical office assistants (MOAs) using a paper and an electronic version of the SBIRT tool, with two and four participants, respectively. Qualitative and quantitative data were analyzed to determine the impact of both tools on clinical workflow. A second round of usability testing was done with the revised electronic version and compared with the first version. A personal workflow barrier cited in the first round of testing was that the electronic health record (EHR) tool was disruptive to patients' visits. In Round 2 of testing, MOAs reported favoring the electronic version due to improved layout and the inclusion of an alert system embedded in the EHR. For example, using the system usability scale (SUS), MOAs reported a grade of "1" for the statement "I would like to use this system frequently" during the first round of testing but a "5" during the second round of analysis. The importance of testing the usability of the various mediums of tools used in health care screening is highlighted by the findings of this study. In the first round of testing, the electronic tool was reported as less user friendly, difficult to navigate, and time consuming. Many issues faced in the first generation of the tool were improved in the second generation after usability was evaluated. This study demonstrates how usability testing of an electronic SBIRT tool can help to identify challenges that can impact clinical workflow. However, a limitation of this study was the small sample size of MOAs who participated. The results may have been biased toward Northwell Health workers' perceptions of the SBIRT tool and their specific clinical workflow.
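
    For reference, the SUS grades quoted above come from ten 1-5 Likert items combined by the standard formula (odd items score − 1, even items 5 − score, total × 2.5 for a 0-100 scale). A sketch with hypothetical responses:

      def sus_score(responses):
          """Standard System Usability Scale scoring for ten 1-5 responses."""
          assert len(responses) == 10
          total = sum((r - 1) if i % 2 == 0 else (5 - r)
                      for i, r in enumerate(responses))
          return total * 2.5

      # Hypothetical answers from one medical office assistant.
      print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))   # -> 87.5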

  5. NEXT GENERATION ANALYSIS SOFTWARE FOR COMPONENT EVALUATION - Results of Rotational Seismometer Evaluation

    NASA Astrophysics Data System (ADS)

    Hart, D. M.; Merchant, B. J.; Abbott, R. E.

    2012-12-01

    The Component Evaluation project at Sandia National Laboratories supports the Ground-based Nuclear Explosion Monitoring program by performing testing and evaluation of the components that are used in seismic and infrasound monitoring systems. In order to perform this work, Component Evaluation maintains a testing facility called the FACT (Facility for Acceptance, Calibration, and Testing) site, a variety of test bed equipment, and a suite of software tools for analyzing test data. Recently, Component Evaluation has successfully integrated several improvements to its software analysis tools and test bed equipment that have substantially improved our ability to test and evaluate components. The software tool that is used to analyze test data is called TALENT: Test and AnaLysis EvaluatioN Tool. TALENT is designed to be a single, standard interface to all test configuration, metadata, parameters, waveforms, and results that are generated in the course of testing monitoring systems. It provides traceability by capturing everything about a test in a relational database that is required to reproduce the results of that test. TALENT provides a simple, yet powerful, user interface to quickly acquire, process, and analyze waveform test data. The software tool has also been expanded recently to handle sensors whose output is proportional to rotation angle or rotation rate. As an example of this new processing capability, we show results from testing the new ATA ARS-16 rotational seismometer. The test data were collected at the USGS ASL. Four datasets were processed: 1) 1 Hz with increasing amplitude, 2) 4 Hz with increasing amplitude, 3) 16 Hz with increasing amplitude and 4) twenty-six discrete frequencies between 0.353 Hz and 64 Hz. The results are compared to manufacturer-supplied data sheets.

  6. Development and validation of a tool to evaluate the quality of medical education websites in pathology.

    PubMed

    Alyusuf, Raja H; Prasad, Kameshwar; Abdel Satir, Ali M; Abalkhail, Ali A; Arora, Roopa K

    2013-01-01

    The exponential use of the internet as a learning resource, coupled with the varied quality of many websites, leads to a need to identify suitable websites for teaching purposes. The aim of this study is to develop and validate a tool that evaluates the quality of undergraduate medical educational websites, and to apply it to the field of pathology. A tool was devised through several steps of item generation, reduction, weightage, pilot testing, post-pilot modification of the tool and validation of the tool. Tool validation included measurement of inter-observer reliability and generation of criterion-related, construct-related and content-related validity. The validated tool was subsequently tested by applying it to a population of pathology websites. Reliability testing showed high internal consistency reliability (Cronbach's alpha = 0.92), high inter-observer reliability (Pearson's correlation r = 0.88), intraclass correlation coefficient = 0.85 and κ = 0.75. It showed high criterion-related, construct-related and content-related validity. The tool showed moderately high concordance with the gold standard (κ = 0.61); 92.2% sensitivity, 67.8% specificity, 75.6% positive predictive value and 88.9% negative predictive value. The validated tool was applied to 278 websites; 29.9% were rated as recommended, 41.0% as recommended with caution and 29.1% as not recommended. A systematic tool was devised to evaluate the quality of websites for medical educational purposes. The tool was shown to yield reliable and valid inferences through its application to pathology websites.
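
    The accuracy figures reported here follow directly from a 2x2 confusion table against the gold standard. A sketch; the counts below are hypothetical values chosen only to roughly reproduce the reported rates:

      def diagnostic_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity, PPV and NPV from confusion counts."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      print(diagnostic_metrics(tp=83, fp=27, fn=7, tn=57))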

  7. Automated Test Case Generation for an Autopilot Requirement Prototype

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael

    2011-01-01

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and in the rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution, which allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component of the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.

  8. Measurement of Spindle Rigidity by using a Magnet Loader

    NASA Astrophysics Data System (ADS)

    Yamazaki, Taku; Matsubara, Atsushi; Fujita, Tomoya; Muraki, Toshiyuki; Asano, Kohei; Kawashima, Kazuyuki

    The static rigidity of a rotating spindle in the radial direction is investigated in this research. A magnetic loading device (magnet loader) has been developed for the measurement. The magnet loader, which has coils and iron cores, generates an electromagnetic force and attracts a dummy tool attached to the spindle. However, an eddy current is generated in the dummy tool as the spindle rotates, reducing the attractive force at high spindle speed. In order to understand the magnetic flux and eddy current in the dummy tool, an electromagnetic field analysis by FEM was carried out. Grooves on the attraction surface of the dummy tool were designed to cut the eddy current flow. The dimensions of the grooves were decided based on the FEM analysis, and the designed tool was manufactured and tested. The test result shows that the designed tool successfully reduces the eddy current and recovers the attractive force. By using the magnet loader and the grooved tool, the spindle rigidity can be measured while the spindle rotates at speeds up to 10,000 min-1.

  9. Full scale wind turbine test of vortex generators mounted on the entire blade

    NASA Astrophysics Data System (ADS)

    Bak, Christian; Skrzypiński, Witold; Gaunaa, Mac; Villanueva, Hector; Brønnum, Niels F.; Kruse, Emil K.

    2016-09-01

    Measurements on a heavily instrumented pitch regulated variable speed Vestas V52 850 kW wind turbine situated at the DTU Risø Campus are carried out, where the effect of vortex generators mounted on almost the entire blade is tested with and without leading edge roughness. The measurements are compared to the predictions carried out by a developed design tool, where the effect of vortex generators and leading edge roughness is simulated using engineering models. The measurements showed that if vortex generators are mounted there is an increase in flapwise blade moments if the blades are clean, but also that the loads are almost neutral when vortex generators are installed if there is leading edge roughness on the blades. Finally, it was shown that there was a good agreement between the measurements and the predictions from the design tool.

  10. A novel modification of the Turing test for artificial intelligence and robotics in healthcare.

    PubMed

    Ashrafian, Hutan; Darzi, Ara; Athanasiou, Thanos

    2015-03-01

    The increasing demands of delivering higher quality global healthcare has resulted in a corresponding expansion in the development of computer-based and robotic healthcare tools that rely on artificially intelligent technologies. The Turing test was designed to assess artificial intelligence (AI) in computer technology. It remains an important qualitative tool for testing the next generation of medical diagnostics and medical robotics. Development of quantifiable diagnostic accuracy meta-analytical evaluative techniques for the Turing test paradigm. Modification of the Turing test to offer quantifiable diagnostic precision and statistical effect-size robustness in the assessment of AI for computer-based and robotic healthcare technologies. Modification of the Turing test to offer robust diagnostic scores for AI can contribute to enhancing and refining the next generation of digital diagnostic technologies and healthcare robotics. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764

  12. Reviews of Instructional Software in Scholarly Journals: A Selected Bibliography.

    ERIC Educational Resources Information Center

    Bantz, David A.; And Others

    This bibliography lists reviews of more than 100 instructional software packages, which are arranged alphabetically by discipline. Information provided for each entry includes the topical emphasis, type of software (i.e., simulation, tutorial, analysis tool, test generator, database, writing tool, drill, plotting tool, videodisc), the journal…

  13. Experimental Stage Separation Tool Development in NASA Langley's Aerothermodynamics Laboratory

    NASA Technical Reports Server (NTRS)

    Murphy, Kelly J.; Scallion, William I.

    2005-01-01

    As part of the research effort at NASA in support of the stage separation and ascent aerothermodynamics research program, proximity testing of a generic bimese wing-body configuration was conducted in NASA Langley's Aerothermodynamics Laboratory in the 20-Inch Mach 6 Air Tunnel. The objective of this work is the development of experimental tools and testing methodologies to apply to hypersonic stage separation problems for future multi-stage launch vehicle systems. Aerodynamic force and moment proximity data were generated at a nominal Mach number of 6 over a small range of angles of attack. The generic bimese configuration was tested in a belly-to-belly and back-to-belly orientation at 86 relative proximity locations. Over 800 aerodynamic proximity data points were taken to serve as a database for code validation. Longitudinal aerodynamic data generated in this test program show very good agreement with viscous computational predictions. Thus a framework has been established to study separation problems in the hypersonic regime using coordinated experimental and computational tools.

  14. Summary of CPAS Gen II Parachute Analysis

    NASA Technical Reports Server (NTRS)

    Morris, Aaron L.; Bledsoe, Kristin J.; Fraire, Usbaldo, Jr.; Moore, James W.; Olson, Leah M.; Ray, Eric

    2011-01-01

    The Orion spacecraft is currently under development by NASA and Lockheed Martin. Like Apollo, Orion will use a series of parachutes to slow its descent and splashdown safely. The Orion parachute system, known as the CEV Parachute Assembly System (CPAS), is being designed by NASA, the Engineering and Science Contract Group (ESCG), and Airborne Systems. The first generation (Gen I) of CPAS testing consisted of thirteen tests and was executed in the 2007-2008 timeframe. The Gen I tests provided an initial understanding of the CPAS parachutes. Knowledge gained from Gen I testing was used to plan the second generation of testing (Gen II). Gen II consisted of six tests: three singleparachute tests, designated as Main Development Tests, and three Cluster Development Tests. Gen II required a more thorough investigation into parachute performance than Gen I. Higher fidelity instrumentation, enhanced analysis methods and tools, and advanced test techniques were developed. The results of the Gen II test series are being incorporated into the CPAS design. Further testing and refinement of the design and model of parachute performance will occur during the upcoming third generation of testing (Gen III). This paper will provide an overview of the developments in CPAS analysis following the end of Gen I, including descriptions of new tools and techniques as well as overviews of the Gen II tests.

  15. Universal Verification Methodology Based Register Test Automation Flow.

    PubMed

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

    In today's SoC designs, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in a shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard mechanism, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate models into a test-bench environment, because doing so requires knowledge of SystemVerilog and the UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe register specifications in the IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated into an IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use register models without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time in verifying the functionality of registers.
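
    A toy illustration of the spreadsheet-to-IP-XACT step (the row fields and the XML subset are simplified; the real flow targets the full IEEE 1685 schema and a commercial generator):

      # Spreadsheet-style register rows, as a designer might fill them in.
      rows = [
          {"name": "CTRL",   "offset": 0x00, "width": 32, "access": "read-write"},
          {"name": "STATUS", "offset": 0x04, "width": 32, "access": "read-only"},
      ]

      def register_xml(row):
          """Emit a minimal IP-XACT-like register element for one row."""
          return (f'<ipxact:register>\n'
                  f'  <ipxact:name>{row["name"]}</ipxact:name>\n'
                  f'  <ipxact:addressOffset>0x{row["offset"]:X}</ipxact:addressOffset>\n'
                  f'  <ipxact:size>{row["width"]}</ipxact:size>\n'
                  f'  <ipxact:access>{row["access"]}</ipxact:access>\n'
                  f'</ipxact:register>')

      for row in rows:
          print(register_xml(row))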

  16. Martian Atmospheric Pressure Static Charge Elimination Tool

    NASA Technical Reports Server (NTRS)

    Johansen, Michael R.

    2014-01-01

    A Martian pressure static charge elimination tool is currently in development in the Electrostatics and Surface Physics Laboratory (ESPL) at NASA's Kennedy Space Center. In standard Earth atmosphere conditions, static charge can be neutralized from an insulating surface using air ionizers. These air ionizers generate ions through corona breakdown. The Martian atmosphere is 7 Torr of mostly carbon dioxide, which makes it inherently difficult to use similar methods as those used for standard atmosphere static elimination tools. An initial prototype has been developed to show feasibility of static charge elimination at low pressure, using corona discharge. A needle point and thin wire loop are used as the corona generating electrodes. A photo of the test apparatus is shown below. Positive and negative high voltage pulses are sent to the needle point. This creates positive and negative ions that can be used for static charge neutralization. In a preliminary test, a floating metal plate was charged to approximately 600 volts under Martian atmospheric conditions. The static elimination tool was enabled and the voltage on the metal plate dropped rapidly to -100 volts. This test data is displayed below. Optimization is necessary to improve the electrostatic balance of the static elimination tool.

  18. Path generation algorithm for UML graphic modeling of aerospace test software

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditionally, aerospace software testing engineers have relied on their own work experience and on communication with software development personnel to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and prone to loopholes. With the high-reliability MBT tool developed by our company, a single modeling pass can automatically generate test case documents, efficiently and accurately. Accurately expressing requirements as a UML model depends on the paths that can be reached through it, but existing path generation algorithms are either too simple, unable to combine branch paths with loops into complete paths, or so cumbersome that they generate complicated, meaningless path arrangements that are superfluous for aerospace software testing. Drawing on our accumulated aerospace experience, we have developed a tailored path generation algorithm for UML graphic models of aerospace test software.
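
    One standard way to combine branch and loop paths without an explosion of useless arrangements is depth-first enumeration with a per-edge bound, so each loop is unrolled at most k times. This sketch illustrates that compromise and is not the paper's exact algorithm:

      def bounded_paths(graph, start, end, k=2):
          """Enumerate start-to-end node paths, traversing any edge at most
          k times so loops are covered finitely."""
          paths, edge_use = [], {}

          def dfs(node, path):
              path.append(node)
              if node == end:
                  paths.append(list(path))
              else:
                  for nxt in graph.get(node, []):
                      edge = (node, nxt)
                      if edge_use.get(edge, 0) < k:
                          edge_use[edge] = edge_use.get(edge, 0) + 1
                          dfs(nxt, path)
                          edge_use[edge] -= 1
              path.pop()

          dfs(start, [])
          return paths

      # A branch at A plus a self-loop at B.
      graph = {"A": ["B", "C"], "B": ["B", "D"], "C": ["D"]}
      for p in bounded_paths(graph, "A", "D"):
          print(" -> ".join(p))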

  19. FY16 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Shemon, E. R.; Smith, M. A.

    2016-09-30

    The goal of the NEAMS neutronics effort is to develop a neutronics toolkit for use on sodium-cooled fast reactors (SFRs) which can be extended to other reactor types. The neutronics toolkit includes the high-fidelity deterministic neutron transport code PROTEUS and many supporting tools such as the cross section generation code MC2-3, a cross section library generation code, alternative cross section generation tools, mesh generation and conversion utilities, and an automated regression test tool. The FY16 effort for NEAMS neutronics focused on supporting the release of the SHARP toolkit and existing and new users, continuing to develop PROTEUS functions necessary for performance improvement as well as the SHARP release, verifying PROTEUS against available existing benchmark problems, and developing new benchmark problems as needed. The FY16 research effort was focused on further updates of PROTEUS-SN and PROTEUS-MOCEX and cross section generation capabilities as needed.

  20. Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base

    NASA Technical Reports Server (NTRS)

    Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. The software tools development began with a detailed MATLAB/Simulink model of the motion base, which was used primarily for safety loads prediction, design of the closed loop compensator, and development of the motion base safety systems. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time history displays, strip chart displays, data storage, and initializing of function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base, simulate the end-to-end operation of the motion base system providing facilities for software-in-the-loop testing, mechanical geometry and sensor data visualizations, and function generator setup and evaluation.

  1. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
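
    The n-factor combinatorial idea can be sketched with a greedy all-pairs (n = 2) generator: every 2-way parameter interaction appears in some case, at far fewer cases than the full cross product. Parameters and values below are hypothetical:

      from itertools import combinations, product

      def pairwise_suite(params):
          """Greedily pick full assignments covering the most not-yet-covered
          parameter-value pairs until every 2-way interaction is covered."""
          names = sorted(params)
          uncovered = {((a, va), (b, vb))
                       for a, b in combinations(names, 2)
                       for va in params[a] for vb in params[b]}
          suite = []
          while uncovered:
              best, best_gain = None, -1
              for values in product(*(params[n] for n in names)):
                  case = dict(zip(names, values))
                  gain = sum(((a, case[a]), (b, case[b])) in uncovered
                             for a, b in combinations(names, 2))
                  if gain > best_gain:
                      best, best_gain = case, gain
              suite.append(best)
              for a, b in combinations(names, 2):
                  uncovered.discard(((a, best[a]), (b, best[b])))
          return suite

      params = {"mass": [0.8, 1.0, 1.2], "thrust": ["low", "high"], "wind": [0, 10]}
      for case in pairwise_suite(params):
          print(case)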

  2. Preparing and Analyzing Iced Airfoils

    NASA Technical Reports Server (NTRS)

    Vickerman, Mary B.; Baez, Marivell; Braun, Donald C.; Cotton, Barbara J.; Choo, Yung K.; Coroneos, Rula M.; Pennline, James A.; Hackenberg, Anthony W.; Schilling, Herbert W.; Slater, John W.

    2004-01-01

    SmaggIce version 1.2 is a computer program for preparing and analyzing iced airfoils. It includes interactive tools for (1) measuring ice-shape characteristics, (2) controlled smoothing of ice shapes, (3) curve discretization, (4) generation of artificial ice shapes, and (5) detection and correction of input errors. Measurements of ice shapes are essential for establishing relationships between the characteristics of ice and the effects of ice on airfoil performance. The shape-smoothing tool helps prepare ice shapes for use with already available grid-generation and computational-fluid-dynamics software for studying the aerodynamic effects of smoothed ice on airfoils. The artificial ice-shape generation tool supports parametric studies, since ice-shape parameters can easily be controlled with the artificial ice. In such studies, artificial shapes generated by this program can supplement simulated ice obtained from icing research tunnels and real ice obtained from flight tests under icing weather conditions. SmaggIce also automatically detects geometry errors, such as tangles or duplicate points in the boundary, which may be introduced by digitization, and provides tools to correct them. By use of the interactive tools included in SmaggIce version 1.2, one can easily characterize ice shapes and prepare iced airfoils for grid generation and flow simulations.

  3. Advancing Exposure Characterization for Chemical Evaluation and Risk Assessment

    EPA Science Inventory

    A new generation of scientific tools has emerged to rapidly measure signals from cells, tissues, and organisms following exposure to chemicals. High-visibility efforts to apply these tools for efficient toxicity testing raise important research questions in exposure science. As v...

  4. Evaluating and Refining High Throughput Tools for Toxicokinetics

    EPA Science Inventory

    This poster summarizes efforts of the Chemical Safety for Sustainability's Rapid Exposure and Dosimetry (RED) team to facilitate the development and refinement of toxicokinetics (TK) tools to be used in conjunction with the high throughput toxicity testing data generated as a par...

  5. Simulation of laser generated ultrasound with application to defect detection

    NASA Astrophysics Data System (ADS)

    Pantano, A.; Cerniglia, D.

    2008-06-01

    Laser generated ultrasound holds substantial promise for use as a tool for defect detection in remote inspection thanks to its ability to produce frequencies in the MHz range, enabling fine spatial resolution of defects. Despite the potential impact of laser generated ultrasound in many areas of science and industry, robust tools for studying the phenomenon are lacking and thus limit the design and optimization of non-destructive testing and evaluation techniques. The propagation of laser generated ultrasound in complex structures is an intricate phenomenon and is extremely hard to analyze. Only simple geometries can be studied analytically. Numerical techniques found in the literature have proved to be limited in their applicability by the MHz-range frequencies and very short wavelengths involved. The objective of this research is to prove that by using an explicit integration rule together with diagonal element mass matrices, instead of the almost universally adopted implicit integration rule to integrate the equations of motion in a dynamic analysis, it is possible to efficiently and accurately solve ultrasound wave propagation problems with frequencies in the MHz range travelling in relatively large bodies. Results presented for NDE testing of rails demonstrate that the proposed FE technique can provide a valuable tool for studying laser generated ultrasound propagation.
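    The computational point of the abstract is that with a diagonal (lumped) mass matrix, the explicit central-difference update needs no linear solve per time step, only elementwise arithmetic. A schematic 1-D sketch, with an invented discretization and illustrative values:

        import numpy as np

        def explicit_step(u, u_prev, m_diag, k, f, dt):
            # M a = f - K u with diagonal M: the "solve" is an elementwise
            # division, which is what keeps explicit integration cheap at
            # MHz frequencies.
            accel = (f - k @ u) / m_diag
            return 2.0 * u - u_prev + dt * dt * accel

        n = 200
        k = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D stiffness stencil
        m = np.ones(n)                                          # lumped mass diagonal
        u, u_prev = np.zeros(n), np.zeros(n)
        f = np.zeros(n); f[0] = 1.0                             # impulsive surface load
        for _ in range(100):                                    # dt below stability limit
            u, u_prev = explicit_step(u, u_prev, m, k, f, dt=0.5), u
            f[0] = 0.0                                          # impulse lasts one step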

  6. Eddy current inspection tool. [Patent application

    DOEpatents

    Petrini, R.R.; Van Lue, D.F.

    1980-10-29

    A miniaturized inspection tool, for testing and inspection of metal objects in locations with difficult accessibility, which comprises eddy current sensing equipment with a probe coil, and associated coaxial coil cable, coil energizing means, and circuit means responsive to impedance changes in the coil as effected by induced eddy currents in a test object to produce a data output signal proportional to such changes. The coil and cable are slideably received in the utility channel of the flexible insertion tube of a fiberoptic scope. The scope is provided with light transmitting and receiving fiberoptics for viewing through the flexible tube, and articulation means for articulating the distal end of the tube and permitting close control of coil placement relative to a test object. The eddy current sensing equipment includes a tone generator for generating audible signals responsive to the data output signal. In one selected mode of operation, the tone generator responsive to the output signal above a selected level generates a constant single frequency tone for signalling detection of a discontinuity and, in a second selected mode, generates a tone whose frequency is proportional to the difference between the output signal and a predetermined selected threshold level.
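    The two audio modes amount to a small mapping from the output signal to a tone frequency. A hypothetical rendering (threshold and frequency constants are invented, not taken from the patent):

        def tone_hz(signal, threshold, mode, detect_hz=1000.0, hz_per_unit=500.0):
            """Return the tone frequency in Hz, or None for silence."""
            if mode == "detect":               # constant tone past the threshold
                return detect_hz if signal > threshold else None
            if mode == "magnitude":            # frequency tracks the excess signal
                excess = signal - threshold
                return hz_per_unit * excess if excess > 0.0 else None
            raise ValueError(f"unknown mode: {mode}")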

  7. Modeling Potential Carbon Monoxide Exposure Due to Operation of a Major Rocket Engine Altitude Test Facility Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Blotzer, Michael J.; Woods, Jody L.

    2009-01-01

    This viewgraph presentation reviews computational fluid dynamics as a tool for modelling the dispersion of carbon monoxide at the Stennis Space Center's A3 Test Stand. The contents include: 1) Constellation Program; 2) Constellation Launch Vehicles; 3) J2X Engine; 4) A-3 Test Stand; 5) Chemical Steam Generators; 6) Emission Estimates; 7) Located in Existing Test Complex; 8) Computational Fluid Dynamics; 9) Computational Tools; 10) CO Modeling; 11) CO Model results; and 12) Next steps.

  8. Conceptual Systems Model as a Tool for Hypothesis Generation and Testing in Ecotoxicological Research

    EPA Science Inventory

    Microarray, proteomic, and metabonomic technologies are becoming increasingly accessible as tools for ecotoxicology research. Effective use of these technologies will depend, at least in part, on the ability to apply these techniques within a paradigm of hypothesis driven researc...

  9. FY17 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Jung, Y. S.; Smith, M. A.

    2017-09-30

    Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.

  10. Symbolic PathFinder: Symbolic Execution of Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Rungta, Neha

    2010-01-01

    Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
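    SPF itself works on Java bytecode, but the underlying pattern, collecting a path constraint and handing it to an off-the-shelf solver to obtain a concrete test input, can be illustrated with the Z3 Python bindings (the branch under test is invented):

        from z3 import Int, Solver, sat

        x = Int("x")
        # Path constraints for the two branches of a hypothetical `if (x * 3 > 30)`.
        for pc in (x * 3 > 30, x * 3 <= 30):
            s = Solver()
            s.add(pc)
            if s.check() == sat:
                print("input covering this path:", s.model()[x])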

  11. Nearly arc-length tool path generation and tool radius compensation algorithm research in FTS turning

    NASA Astrophysics Data System (ADS)

    Zhao, Minghui; Zhao, Xuesen; Li, Zengqiang; Sun, Tao

    2014-08-01

    In the generation of non-rotationally symmetrical microstructured surfaces by turning with a Fast Tool Servo (FTS), non-uniform distribution of the interpolation data points leads to long processing cycles and poor surface quality. To improve this situation, a nearly arc-length tool path generation algorithm is proposed, which generates tool tip trajectory points at nearly equal arc lengths instead of by the traditional interpolation rule of equal angles, and adds tool radius compensation. All the interpolation points are equidistant in the radial direction because of the constant feed speed of the X slider; the high-frequency tool radius compensation components lie in both the X and Z directions, which makes it difficult for the X slider to follow the commanded motion because of its large mass. Newton's iterative method is used to calculate the coordinates of the neighboring contour tangent point, with the interpolation point's X position as the initial value; in this way the new Z coordinate value is obtained and the high-frequency motion component in the X direction is decomposed into the Z direction. As a test case, a typical microstructure with a 4 μm PV value, composed of two 70 μm-wavelength sine waves, was turned with a diamond tool with a large radius of 80 μm; the maximum profile error at an angle of fifteen degrees is less than 0.01 μm. The sinusoidal grid was successfully machined on an ultra-precision lathe; the wavelength is 70.2278 μm and the Ra value is 22.81 nm, evaluated from data points generated by filtering out the first five harmonics.
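    As a rough sketch of the compensation step (the profile and dimensions are invented stand-ins, not the paper's exact surface), Newton's method finds the contact point whose normal offset by the tool radius lands on the commanded X position:

        import math

        A, L, R = 2.0, 70.0, 10.0      # sine amplitude, wavelength, tool radius (um)

        f  = lambda x: A * math.sin(2.0 * math.pi * x / L)
        fp = lambda x: A * (2.0 * math.pi / L) * math.cos(2.0 * math.pi * x / L)

        def contact_x(xc, tol=1e-10, max_iter=50):
            # g(x) = 0 forces the tool-center x (contact point pushed out along
            # the surface normal by R) to equal the commanded position xc.
            g = lambda x: x - R * fp(x) / math.hypot(1.0, fp(x)) - xc
            x, h = xc, 1e-6            # start from the interpolation X position
            for _ in range(max_iter):
                step = g(x) / ((g(x + h) - g(x)) / h)   # Newton, numeric derivative
                x -= step
                if abs(step) < tol:
                    break
            return x

        x = contact_x(10.0)
        z_center = f(x) + R / math.hypot(1.0, fp(x))    # tool-center height at tangency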

  12. Trial Promoter: A Web-Based Tool for Boosting the Promotion of Clinical Research Through Social Media.

    PubMed

    Reuter, Katja; Ukpolo, Francis; Ward, Edward; Wilson, Melissa L; Angyan, Praveen

    2016-06-29

    Scarce information about clinical research, in particular clinical trials, is among the top reasons why potential participants do not take part in clinical studies. Without volunteers, on the other hand, clinical research and the development of novel approaches to preventing, diagnosing, and treating disease are impossible. Promising digital options such as social media have the potential to work alongside traditional methods to boost the promotion of clinical research. However, investigators and research institutions are challenged to leverage these innovations while saving time and resources. To develop and test the efficiency of a Web-based tool that automates the generation and distribution of user-friendly social media messages about clinical trials. Trial Promoter is developed in Ruby on Rails, HTML, cascading style sheet (CSS), and JavaScript. In order to test the tool and the correctness of the generated messages, clinical trials (n=46) were randomized into social media messages and distributed via the microblogging social media platform Twitter and the social network Facebook. The percent correct was calculated to determine the probability with which Trial Promoter generates accurate messages. During a 10-week testing phase, Trial Promoter automatically generated and published 525 user-friendly social media messages on Twitter and Facebook. On average, Trial Promoter correctly used the message templates and substituted the message parameters (text, URLs, and disease hashtags) 97.7% of the time (1563/1600). Trial Promoter may serve as a promising tool to render clinical trial promotion more efficient while requiring limited resources. It supports the distribution of any research or other types of content. The Trial Promoter code and installation instructions are freely available online.
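    The measured step, template filling plus accuracy bookkeeping, is easy to picture. Trial Promoter is a Ruby on Rails application; the following Python sketch (with invented field names and checks) only illustrates the shape of the computation:

        TEMPLATE = "Now recruiting: {title} {url} {hashtag}"

        def render(trial):
            return TEMPLATE.format(**trial)

        trials = [
            {"title": "Heart failure exercise study",
             "url": "https://example.org/t1", "hashtag": "#HeartFailure"},
            {"title": "Migraine prevention trial",
             "url": "https://example.org/t2", "hashtag": "#Migraine"},
        ]
        messages = [render(t) for t in trials]
        # Toy correctness check: substitution happened and exactly one URL present.
        correct = sum("{" not in m and m.count("http") == 1 for m in messages)
        print(f"{100.0 * correct / len(messages):.1f}% of messages passed the checks")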

  13. Trial Promoter: A Web-Based Tool for Boosting the Promotion of Clinical Research Through Social Media

    PubMed Central

    Ukpolo, Francis; Ward, Edward; Wilson, Melissa L

    2016-01-01

    Background Scarce information about clinical research, in particular clinical trials, is among the top reasons why potential participants do not take part in clinical studies. Without volunteers, on the other hand, clinical research and the development of novel approaches to preventing, diagnosing, and treating disease are impossible. Promising digital options such as social media have the potential to work alongside traditional methods to boost the promotion of clinical research. However, investigators and research institutions are challenged to leverage these innovations while saving time and resources. Objective To develop and test the efficiency of a Web-based tool that automates the generation and distribution of user-friendly social media messages about clinical trials. Methods Trial Promoter is developed in Ruby on Rails, HTML, cascading style sheet (CSS), and JavaScript. In order to test the tool and the correctness of the generated messages, clinical trials (n=46) were randomized into social media messages and distributed via the microblogging social media platform Twitter and the social network Facebook. The percent correct was calculated to determine the probability with which Trial Promoter generates accurate messages. Results During a 10-week testing phase, Trial Promoter automatically generated and published 525 user-friendly social media messages on Twitter and Facebook. On average, Trial Promoter correctly used the message templates and substituted the message parameters (text, URLs, and disease hashtags) 97.7% of the time (1563/1600). Conclusions Trial Promoter may serve as a promising tool to render clinical trial promotion more efficient while requiring limited resources. It supports the distribution of any research or other types of content. The Trial Promoter code and installation instructions are freely available online. PMID:27357424

  14. TTCN-3 Based Conformance Testing of Mobile Broadcast Business Management System in 3G Networks

    NASA Astrophysics Data System (ADS)

    Wang, Zhiliang; Yin, Xia; Xiang, Yang; Zhu, Ruiping; Gao, Shirui; Wu, Xin; Liu, Shijian; Gao, Song; Zhou, Li; Li, Peng

    Mobile broadcast service is one of the most important emerging services in 3G networks. To better operate and manage mobile broadcast services, a mobile broadcast business management system (MBBMS) should be designed and developed. Such a system, with its distributed nature, complicated XML data and security mechanism, faces many challenges in testing technology. In this paper, we study the conformance testing methodology of MBBMS, and design and implement an MBBMS protocol conformance testing tool based on TTCN-3, a standardized test description language that can be used in black-box testing of reactive and distributed systems. In this methodology and testing tool, we present a semi-automatic XML test data generation method for the TTCN-3 test suite and use the HMSC model to aid the design of the test suite. In addition, we propose an integrated testing method for the hierarchical MBBMS security architecture. The tool has been used in industrial-level testing.
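    The paper's semi-automatic XML test data generation is not specified in detail; one simple interpretation, enumerating boundary values for each field of a request template, can be sketched with the Python standard library (element names are invented):

        import itertools
        import xml.etree.ElementTree as ET

        def xml_cases(field_values):
            # field_values: element tag -> list of boundary values to try.
            for combo in itertools.product(*field_values.values()):
                root = ET.Element("MBBMSRequest")
                for tag, value in zip(field_values, combo):
                    ET.SubElement(root, tag).text = str(value)
                yield ET.tostring(root, encoding="unicode")

        for doc in xml_cases({"userId": ["", "u001"], "serviceId": [0, 65535]}):
            print(doc)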

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraioli, Luigi; Hueller, Mauro; Vitale, Stefano

    The scientific objectives of the LISA Technology Package experiment on board the LISA Pathfinder mission demand accurate calibration and validation of the data analysis tools in advance of the mission launch. The level of confidence required in the mission outcomes can be reached only by intensively testing the tools on synthetically generated data. A flexible procedure allowing the generation of a cross-correlated stationary noise time series was set up. A multichannel time series with the desired cross-correlation behavior can be generated once a model for a multichannel cross-spectral matrix is provided. The core of the procedure comprises a noise-coloring multichannel filter designed via a frequency-by-frequency eigendecomposition of the model cross-spectral matrix and a subsequent fit in the Z domain. The common problem of initial transients in a filtered time series is solved with a proper initialization of the filter recursion equations. The noise generator performance was tested in a two-dimensional case study of the closed-loop LISA Technology Package dynamics along the two principal degrees of freedom.
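    The frequency-by-frequency eigendecomposition at the core of the procedure can be sketched in a few lines of NumPy. This shows only the synthesis idea (overall normalization omitted); the flight tool adds the Z-domain filter fit and the transient-free filter initialization described above.

        import numpy as np

        def correlated_noise(csd, n_samples, seed=0):
            # csd: (n_freq, n_ch, n_ch) Hermitian cross-spectral matrices,
            # with n_freq == n_samples // 2 + 1 one-sided frequency bins.
            rng = np.random.default_rng(seed)
            n_freq, n_ch, _ = csd.shape
            spec = np.empty((n_freq, n_ch), dtype=complex)
            for k in range(n_freq):
                w, v = np.linalg.eigh(csd[k])             # per-bin eigendecomposition
                root = v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T
                z = (rng.standard_normal(n_ch)
                     + 1j * rng.standard_normal(n_ch)) / np.sqrt(2.0)
                spec[k] = root @ z                        # color independent noise
            return np.array([np.fft.irfft(spec[:, c], n=n_samples)
                             for c in range(n_ch)])

        S = np.array([[1.0, 0.8], [0.8, 1.0]])            # two channels, 0.8 coherence
        noise = correlated_noise(np.broadcast_to(S, (513, 2, 2)), 1024)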

  16. Advanced composite rudders for DC-10 aircraft: Design, manufacturing, and ground tests

    NASA Technical Reports Server (NTRS)

    Lehman, G. M.; Purdy, D. M.; Cominsky, A.; Hawley, A. V.; Amason, M. P.; Kung, J. T.; Palmer, R. J.; Purves, N. B.; Marra, P. J.; Hancock, G. R.

    1976-01-01

    Design synthesis, tooling and process development, manufacturing, and ground testing of a graphite epoxy rudder for the DC-10 commercial transport are discussed. The composite structure was fabricated using a unique processing method in which the thermal expansion characteristics of rubber tooling mandrels were used to generate curing pressures during an oven cure cycle. The ground test program resulted in certification of the rudder for passenger-carrying flights. Results of the structural and environmental tests are interpreted and detailed development of the rubber tooling and manufacturing process is described. Processing, tooling, and manufacturing problems encountered during fabrication of four development rudders and ten flight-service rudders are discussed and the results of corrective actions are described. Non-recurring and recurring manufacturing labor man-hours are tabulated at the detailed operation level. A weight reduction of 13.58 kg (33 percent) was attained in the composite rudder.

  17. Eddy current inspection tool which is selectively operable in a discontinuity detection mode and a discontinuity magnitude mode

    DOEpatents

    Petrini, Richard R.; Van Lue, Dorin F.

    1983-01-01

    A miniaturized inspection tool, for testing and inspection of metal objects in locations with difficult accessibility, which comprises eddy current sensing equipment (12) with a probe coil (11), and associated coaxial coil cable (13), coil energizing means (21), and circuit means (21, 12) responsive to impedance changes in the coil as effected by induced eddy currents in a test object to produce a data output signal proportional to such changes. The coil and cable are slideably received in the utility channel of the flexible insertion tube 17 of fiberoptic scope 10. The scope 10 is provided with light transmitting and receiving fiberoptics for viewing through the flexible tube, and articulation means (19, 20) for articulating the distal end of the tube and permitting close control of coil placement relative to a test object. The eddy current sensing equipment includes a tone generator 30 for generating audible signals responsive to the data output signal. In one selected mode of operation, the tone generator responsive to the output signal above a selected level generates a constant single frequency tone for signalling detection of a discontinuity and, in a second selected mode, generates a tone whose frequency is proportional to the difference between the output signal and a predetermined selected threshold level.

  18. Eddy current inspection tool which is selectively operable in a discontinuity detection mode and a discontinuity magnitude mode

    DOEpatents

    Petrini, R.R.; Van Lue, D.F.

    1983-10-25

    A miniaturized inspection tool, for testing and inspection of metal objects in locations with difficult accessibility, which comprises eddy current sensing equipment with a probe coil, and associated coaxial coil cable, coil energizing means, and circuit means responsive to impedance changes in the coil as effected by induced eddy currents in a test object to produce a data output signal proportional to such changes. The coil and cable are slideably received in the utility channel of the flexible insertion tube of a fiberoptic scope. The scope is provided with light transmitting and receiving fiberoptics for viewing through the flexible tube, and articulation means for articulating the distal end of the tube and permitting close control of coil placement relative to a test object. The eddy current sensing equipment includes a tone generator for generating audible signals responsive to the data output signal. In one selected mode of operation, the tone generator responsive to the output signal above a selected level generates a constant single frequency tone for signaling detection of a discontinuity and, in a second selected mode, generates a tone whose frequency is proportional to the difference between the output signal and a predetermined selected threshold level. 5 figs.

  19. A decision support tool for landfill methane generation and gas collection.

    PubMed

    Emkes, Harriet; Coulon, Frédéric; Wagland, Stuart

    2015-09-01

    This study presents a decision support tool (DST) to enhance methane generation at individual landfill sites. To date there is no such tool available to provide landfill decision makers with clear and simplified information to evaluate biochemical processes within a landfill site, to assess performance of gas production and to identify potential remedies to any issues. The current lack in understanding stems from the complexity of the landfill waste degradation process. Two scoring sets for landfill gas production performance are calculated with the tool: (1) a methane output score, which measures the deviation of the actual methane output rate at each site from the prediction generated by the first order decay model LandGEM; and (2) a landfill gas indicators score, which measures the deviation of the landfill gas indicators from their ideal ranges for optimal methane generation conditions. Landfill gas indicators include moisture content, temperature, alkalinity, pH, BOD, COD, BOD/COD ratio, ammonia, chloride, iron and zinc. A total landfill gas indicator score is provided using multi-criteria analysis to calculate the sum of weighted scores for each indicator. The weights for each indicator are calculated using an analytical hierarchical process. The tool is tested against five real scenarios for landfill sites in the UK with a range of good, average and poor landfill methane generation over a one year period (2012). An interpretation of the results is given for each scenario and recommendations are highlighted for methane output rate enhancement. Results demonstrate how the tool can help landfill managers and operators to enhance their understanding of methane generation at a site-specific level, track landfill methane generation over time, compare and rank sites, and identify problem areas within a landfill site. Copyright © 2015 Elsevier Ltd. All rights reserved.
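    The weighted-sum aggregation is straightforward to sketch; the indicator names, ideal ranges and weights below are placeholders, not the paper's calibrated AHP values:

        IDEAL = {"pH": (6.8, 7.4), "temperature_C": (30.0, 45.0), "moisture_pct": (40.0, 60.0)}
        WEIGHTS = {"pH": 0.5, "temperature_C": 0.3, "moisture_pct": 0.2}  # sum to 1

        def indicator_score(value, lo, hi):
            """1 inside the ideal range, decaying linearly with distance outside."""
            if lo <= value <= hi:
                return 1.0
            edge = lo if value < lo else hi
            return max(0.0, 1.0 - abs(value - edge) / (hi - lo))

        def total_score(readings):
            return sum(w * indicator_score(readings[k], *IDEAL[k])
                       for k, w in WEIGHTS.items())

        print(total_score({"pH": 6.2, "temperature_C": 38.0, "moisture_pct": 55.0}))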

  20. A survey of tools for variant analysis of next-generation genome sequencing data

    PubMed Central

    Pabinger, Stephan; Dander, Andreas; Fischer, Maria; Snajder, Rene; Sperk, Michael; Efremova, Mirjana; Krabichler, Birgit; Speicher, Michael R.; Zschocke, Johannes

    2014-01-01

    Recent advances in genome sequencing technologies provide unprecedented opportunities to characterize individual genomic landscapes and identify mutations relevant for diagnosis and therapy. Specifically, whole-exome sequencing using next-generation sequencing (NGS) technologies is gaining popularity in the human genetics community due to the moderate costs, manageable data amounts and straightforward interpretation of analysis results. While whole-exome and, in the near future, whole-genome sequencing are becoming commodities, data analysis still poses significant challenges and led to the development of a plethora of tools supporting specific parts of the analysis workflow or providing a complete solution. Here, we surveyed 205 tools for whole-genome/whole-exome sequencing data analysis supporting five distinct analytical steps: quality assessment, alignment, variant identification, variant annotation and visualization. We report an overview of the functionality, features and specific requirements of the individual tools. We then selected 32 programs for variant identification, variant annotation and visualization, which were subjected to hands-on evaluation using four data sets: one set of exome data from two patients with a rare disease for testing identification of germline mutations, two cancer data sets for testing variant callers for somatic mutations, copy number variations and structural variations, and one semi-synthetic data set for testing identification of copy number variations. Our comprehensive survey and evaluation of NGS tools provides a valuable guideline for human geneticists working on Mendelian disorders, complex diseases and cancers. PMID:23341494

  1. Revel8or: Model Driven Capacity Planning Tool Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Liming; Liu, Yan; Bui, Ngoc B.

    2007-05-31

    Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8or. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.

  2. Cutting force measurement of electrical jigsaw by strain gauges

    NASA Astrophysics Data System (ADS)

    Kazup, L.; Varadine Szarka, A.

    2016-11-01

    This paper describes a measuring method based on strain gauges for accurate determination of an electric jigsaw's cutting force. The goal of the measurement is to provide an overall picture of the forces generated in the jigsaw's gearbox during a cutting period, since these forces primarily determine the lifetime of the tool. This analysis is part of a research and development project aiming to develop a special linear magnetic brake for realizing automatic lifetime tests of electric jigsaws and similar handheld tools. Accurate determination of the cutting force makes it possible to define realistic test cycles during the automatic lifetime test. The accuracy and precision provided by the well-described cutting-force characteristic, together with the possibility of automation, add a new dimension to lifetime testing of handheld tools with alternating movement.

  3. Cliffbot Maestro

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Powell, Mark W.; Fox, Jason M.; Crockett, Thomas M.; Joswig, Joseph C.

    2009-01-01

    Cliffbot Maestro permits teleoperation of remote rovers for field testing in extreme environments. The application user interface provides two sets of tools for operations: stereo image browsing and command generation.

  4. A Rapid Assessment Tool for affirming good practice in midwifery education programming.

    PubMed

    Fullerton, Judith T; Johnson, Peter; Lobe, Erika; Myint, Khine Haymar; Aung, Nan Nan; Moe, Thida; Linn, Nay Aung

    2016-03-01

    to design a criterion-referenced assessment tool that could be used globally in a rapid assessment of good practices and bottlenecks in midwifery education programs. a standard tool development process was followed to generate standards and reference criteria, followed by external review and field testing to document psychometric properties. reviews of the standards and scoring criteria were conducted by stakeholders around the globe. Field testing of the tool was conducted in Myanmar. eleven of Myanmar's 22 midwifery education programs participated in the assessment. the clinimetric tool was demonstrated to have content validity and high inter-rater reliability in use. a globally validated tool, and accompanying user guide and handbook, are now available for conducting rapid assessments of compliance with good practice criteria in midwifery education programming. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Preliminary design of mesoscale turbocompressor and rotordynamics tests of rotor bearing system

    NASA Astrophysics Data System (ADS)

    Hossain, Md Saddam

    2011-12-01

    A mesoscale turbocompressor spinning above 500,000 RPM is an evolutionary technology for micro turbochargers, turbo blowers, turbo compressors, micro-gas turbines, auxiliary power units, etc., for the automotive, aerospace, and fuel cell industries. The objectives of this work are: (1) to evaluate different air foil bearings designed for the intended applications, and (2) to design and perform CFD analysis of a micro-compressor. CFD analysis of a shrouded 3-D micro compressor was conducted using Ansys BladeGen as the blade generation tool, ICEM CFD as the mesh generation tool, and CFX as the main solver, for different design and off-design cases and for different numbers of blades. Comprehensive experimental facilities for testing the turbocompressor system have also been designed and proposed for future work.

  6. Prediction of Acoustic Loads Generated by Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Perez, Linamaria; Allgood, Daniel C.

    2011-01-01

    NASA Stennis Space Center is one of the nation's premier facilities for conducting large-scale rocket engine testing. As liquid rocket engines vary in size, so do the acoustic loads that they produce. When these acoustic loads reach very high levels they may cause damage both to people and to structures surrounding the testing area. To prevent such damage, prediction tools are used to estimate the spectral content and levels of the acoustics generated by the rocket engine plumes and to model their propagation through the surrounding atmosphere. Prior to the current work, two different acoustic prediction tools were being used at Stennis Space Center, each having its own advantages and disadvantages depending on the application. Therefore, a new prediction tool was created, using the NASA SP-8072 handbook as a guide, which replicates the prediction methods of the previous codes but eliminates the drawbacks the individual codes had. Aside from replicating the previous modeling capability in a single framework, additional modeling functions were added, thereby expanding the current modeling capability. To verify that the new code could reproduce the same predictions as the previous codes, two verification test cases were defined. These verification test cases also served as validation cases, as the predicted results were compared to actual test data.

  7. The SPARK Tool to prioritise questions for systematic reviews in health policy and systems research: development and initial validation.

    PubMed

    Akl, Elie A; Fadlallah, Racha; Ghandour, Lilian; Kdouh, Ola; Langlois, Etienne; Lavis, John N; Schünemann, Holger; El-Jardali, Fadi

    2017-09-04

    Groups or institutions funding or conducting systematic reviews in health policy and systems research (HPSR) should prioritise topics according to the needs of policymakers and stakeholders. The aim of this study was to develop and validate a tool to prioritise questions for systematic reviews in HPSR. We developed the tool following a four-step approach consisting of (1) the definition of the purpose and scope of the tool, (2) item generation and reduction, (3) testing for content and face validity, and (4) pilot testing of the tool. The research team involved international experts in HPSR, systematic review methodology and tool development, led by the Center for Systematic Reviews on Health Policy and Systems Research (SPARK). We followed an inclusive approach in determining the final selection of items to allow customisation to the user's needs. The purpose of the SPARK tool was to prioritise questions in HPSR in order to address them in systematic reviews. In the item generation and reduction phase, an extensive literature search yielded 40 relevant articles, which were reviewed by the research team to create a preliminary list of 19 candidate items for inclusion in the tool. As part of testing for content and face validity, input from international experts led to the refining, changing, merging and addition of new items, and to organisation of the tool into two modules. Following pilot testing, we finalised the tool, with 22 items organised in two modules: the first module including 13 items to be rated by policymakers and stakeholders, and the second including 9 items to be rated by systematic review teams. Users can customise the tool to their needs by omitting items that may not be applicable to their settings. We also developed a user manual that provides guidance on how to use the SPARK tool, along with signaling questions. We have developed and conducted initial validation of the SPARK tool to prioritise questions for systematic reviews in HPSR, along with a user manual. By aligning systematic review production to policy priorities, the tool will help support evidence-informed policymaking and reduce research waste. We invite others to contribute with additional real-life implementation of the tool.

  8. GridTool: A surface modeling and grid generation tool

    NASA Technical Reports Server (NTRS)

    Samareh-Abolhassani, Jamshid

    1995-01-01

    GridTool is designed around the concept that the surface grids are generated on a set of bi-linear patches. This type of grid generation is quite easy to implement, and it avoids the problems associated with complex CAD surface representations and associated surface parameterizations. However, the resulting surface grids are close to but not on the original CAD surfaces. This problem can be alleviated by projecting the resulting surface grids onto the original CAD surfaces. GridTool is designed primarily for unstructured grid generation systems. Currently, GridTool supports the VGRID and FELISA systems, and it can be easily extended to support other unstructured grid generation systems. The data in GridTool are stored parametrically so that once the problem is set up, one can modify the surfaces and the entire set of points, curves and patches will be updated automatically. This is very useful in a multidisciplinary design and optimization process. GridTool is written entirely in ANSI C, the interface is based on the FORMS library, and the graphics are based on the GL library. The code has been tested successfully on IRIS workstations running IRIX 4.0 and above. Memory is allocated dynamically; therefore, memory size will depend on the complexity of the geometry/grid. The GridTool data structure is based on a linked-list structure, which allows the required memory to expand and contract dynamically according to the user's data size and actions. The data structure contains several types of objects such as points, curves, patches, sources and surfaces. At any given time there is always an active object, which is drawn in magenta or in its highlighted color as defined by the resource file.
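    A bilinear patch interpolates its four corner points, so grid nodes are simply evaluations at parameter values (u, v) in the unit square. A minimal sketch (corner coordinates invented):

        import numpy as np

        def bilinear_patch(p00, p10, p01, p11, u, v):
            p00, p10, p01, p11 = map(np.asarray, (p00, p10, p01, p11))
            return ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
                    + (1 - u) * v * p01 + u * v * p11)

        corners = ([0, 0, 0], [1, 0, 0], [0, 1, 0.2], [1, 1, 0.5])
        grid = [[bilinear_patch(*corners, u, v) for u in np.linspace(0, 1, 5)]
                for v in np.linspace(0, 1, 5)]  # structured 5x5 surface grid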

  9. Quiet Clean Short-Haul Experimental Engine (QCSEE) Under-The-Wing (UTW) graphite/PMR cowl development

    NASA Technical Reports Server (NTRS)

    Ruggles, C. L.

    1978-01-01

    The PMR process development, tooling concepts, testing conducted to generate materials properties data, and the fabrication of a subscale model of the inner cowl are presented. It was concluded that the materials, processes, and tooling concepts were satisfactory for making an inner cowl with adequate structural integrity.

  10. Applying Adaptive Variables in Computerised Adaptive Testing

    ERIC Educational Resources Information Center

    Triantafillou, Evangelos; Georgiadou, Elissavet; Economides, Anastasios A.

    2007-01-01

    Current research in computerised adaptive testing (CAT) focuses on applications, in small and large scale, that address self assessment, training, employment, teacher professional development for schools, industry, military, assessment of non-cognitive skills, etc. Dynamic item generation tools and automated scoring of complex, constructed…

  11. Dust Emission Induced By Friction Modifications At Tool Chip Interface In Dry Machining In MMCp

    NASA Astrophysics Data System (ADS)

    Kremer, Arnaud; El Mansori, Mohamed

    2011-01-01

    This paper investigates the relationship between dust emission and tribological conditions at the tool-chip interface when machining metal matrix composites reinforced with particles (MMCp) in dry mode. Machining generates aerosols that can easily be inhaled by workers. Aerosols may be composed of oil mist, tool material or alloying elements of the workpiece material. Bar turning tests were conducted on a 2009 aluminum alloy reinforced with different levels of silicon carbide particles (15, 25 and 35% SiCp). A variety of PCD tools and nanostructured diamond coatings were used to analyze their performance with respect to air pollution. A spectrometer was used to detect airborne aerosol particles in the size range between 0.3 μm and 20 μm and to sort them into 15 size channels in real time. It was used to compare the effects of test parameters on dust emission. Observations of the tool face and chip morphology reveal the importance of friction phenomena. It was demonstrated that the level of friction modifies chip curvature and dust emission. Increasing the level of reinforcement increases chip segmentation and decreases the contact length and friction area. A "running-in" phenomenon with significant dust emission appeared with the PCD tool due to the flatness of the tool rake face. In addition, dust generation is more sensitive to edge integrity than to power consumption.

  12. A-2 Test Stand modification work

    NASA Image and Video Library

    2010-10-27

    John C. Stennis Space Center employees install a new master interface tool on the A-2 Test Stand on Oct. 27, 2010. Until July 2009, the stand had been used for testing space shuttle main engines. With that test series complete, employees are preparing the stand for testing the next-generation J-2X rocket engine being developed. Testing of the new engine is scheduled to begin in 2011.

  13. Portable spark-gap arc generator

    NASA Technical Reports Server (NTRS)

    Ignaczak, L. R.

    1978-01-01

    A self-contained spark generator that simulates electrical noise caused by the discharge of static charge is a useful tool when checking sensitive components and equipment. In a test setup, the device introduces repeatable noise pulses while the behavior of components is monitored. The generator uses only standard commercial parts, weighs only 4 pounds, and runs from a portable dc power supply. Two configurations have been developed: one is a free-running arc source, and one delivers a spark in response to a triggering pulse.

  14. Is there a "net generation" in veterinary medicine? A comparative study on the use of the Internet and Web 2.0 by students and the veterinary profession.

    PubMed

    Tenhaven, Christoph; Tipold, Andrea; Fischer, Martin R; Ehlers, Jan P

    2013-01-01

    Informal and formal lifelong learning is essential at university and in the workplace. Apart from classical learning techniques, Web 2.0 tools can be used. It is controversial whether there is a so-called net generation amongst people under 30. To test the hypothesis that a net generation exists among students and young veterinarians. An online survey of students and veterinarians was conducted in the German-speaking countries, advertised via online media and traditional print media. 1780 people took part in the survey. Students and veterinarians have different usage patterns regarding social networks (91.9% vs. 69%) and instant messaging (55.9% vs. 24.5%). All tools were used predominantly passively and privately, and to a lesser extent professionally and for studying. The use of Web 2.0 tools is beneficial; however, teaching information and media skills, establishing codes of conduct for the Internet, and verifying user-generated content are essential.

  15. Testing framework for embedded languages

    NASA Astrophysics Data System (ADS)

    Leskó, Dániel; Tejfel, Máté

    2012-09-01

    Embedding a new programming language into an existing one is a widely used technique, because it speeds up the development process and provides part of the language infrastructure for free (e.g. lexical and syntactical analyzers). In this paper we present a further advantage of this development approach: adding testing support for these new languages. Tool support for testing is a crucial point for a newly designed programming language. It could be achieved the hard way by creating a testing tool from scratch, or by reusing existing testing tools and extending them with an interface to the new language. The second approach requires less work, and it also fits very well with the embedded approach. The problem is that the creation of such interfaces is not straightforward at all, because existing testing tools were mostly not designed to be extendable or to deal with new languages. This paper presents an extendable and modular model of a testing framework, in which the most basic design decision was to keep the previously mentioned interface creation simple and straightforward. Other important aspects of our model are test data generation, the oracle problem and the customizability of the whole testing phase.
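    The "simple and straightforward interface" argument can be pictured as a single adapter contract that every embedded language implements, after which the host framework drives the same generate-run-judge loop unchanged. A minimal Python sketch (method names are invented for illustration):

        from typing import Any, Iterable, Protocol

        class LanguageAdapter(Protocol):
            # The one interface an embedded language provides to reuse the
            # host testing framework.
            def generate_inputs(self, spec: str) -> Iterable[Any]: ...
            def run(self, program: str, value: Any) -> Any: ...

        def check(adapter: LanguageAdapter, program: str, spec: str, oracle) -> bool:
            # The oracle callback encapsulates the expected-result judgment
            # (the "oracle problem" mentioned above).
            return all(oracle(v, adapter.run(program, v))
                       for v in adapter.generate_inputs(spec))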

  16. Slow-oscillatory Transcranial Direct Current Stimulation Modulates Memory in Temporal Lobe Epilepsy by Altering Sleep Spindle Generators: A Possible Rehabilitation Tool.

    PubMed

    Del Felice, Alessandra; Magalini, Alessandra; Masiero, Stefano

    2015-01-01

    Temporal lobe epilepsy (TLE) is often associated with memory deficits. Given the putative role of sleep spindles in memory consolidation, spindle generators skewed toward the affected lobe in TLE subjects may be a neurophysiological marker of defective memory. Slow-oscillatory transcranial direct current stimulation (sotDCS) during slow-wave sleep (SWS) has previously been shown to enhance sleep-dependent memory consolidation by increasing slow-wave sleep and modulating sleep spindles. To test whether anodal sotDCS over the affected TL prior to a nap affects sleep spindles and whether this improves memory consolidation. Randomized controlled cross-over study. 12 people with TLE underwent sotDCS (0.75 Hz; 0-250 μV, 30 min) or sham before a daytime nap. Declarative verbal and visuospatial learning were tested. Fast and slow spindle signals were recorded by 256-channel EEG during sleep. In both study arms, electrical source imaging (ESI) localized the cortical generators. Neuropsychological data were analyzed with general linear model statistics or the Kruskal-Wallis test (P or Z < 0.05), and neurophysiological data were tested with the Mann-Whitney test and binomial distribution test (P or Z < 0.05). An improvement in declarative (P = 0.05) and visuospatial memory performance (P = 0.048) emerged after sotDCS. SotDCS increased slow spindle generators' current density (Z = 0.001), with a shift toward the anterior cortical areas. Anodal sotDCS over the affected temporal lobe improves declarative and visuospatial memory performance by modulating the cortical source generators of slow sleep spindles. SotDCS appears a promising tool for memory rehabilitation in people with TLE. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Development of a Novel Quantitative Adverse Outcome Pathway Predictive Model for Lung Cancer

    EPA Science Inventory

    Traditional methods for carcinogenicity testing are resource-intensive, retrospective, and time consuming. An increasing testing burden has generated interest in the adverse outcome pathway (AOP) concept as a tool to evaluate chemical safety in a more efficient, rapid and effecti...

  18. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today's power grid, with its increasingly stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements, including phasor measurement units (PMUs) and digital fault recorders (DFRs), has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
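    The EnKF calibration step nudges an ensemble of candidate parameter vectors toward the PMU measurement, with all covariances estimated from the ensemble itself. A toy sketch (shapes and the noise level are illustrative, not the paper's implementation):

        import numpy as np

        def enkf_update(params, predicted, measured, meas_var, rng):
            # params: (n_ens, n_par) ensemble; predicted: (n_ens, n_obs) outputs.
            P = params - params.mean(axis=0)
            Y = predicted - predicted.mean(axis=0)
            n = params.shape[0]
            cov_py = P.T @ Y / (n - 1)                       # cross-covariance
            cov_yy = Y.T @ Y / (n - 1) + meas_var * np.eye(Y.shape[1])
            gain = cov_py @ np.linalg.inv(cov_yy)            # Kalman gain
            perturbed = measured + rng.normal(0.0, np.sqrt(meas_var), predicted.shape)
            return params + (perturbed - predicted) @ gain.T

        rng = np.random.default_rng(1)
        ens = rng.normal([1.0, 0.5], 0.2, size=(50, 2))      # 50 candidate parameter sets
        pred = ens @ np.array([[1.0], [2.0]])                # toy "simulation" output
        ens = enkf_update(ens, pred, np.array([2.3]), 0.01, rng)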

  19. An overview of new video coding tools under consideration for VP10: the successor to VP9

    NASA Astrophysics Data System (ADS)

    Mukherjee, Debargha; Su, Hui; Bankoski, James; Converse, Alex; Han, Jingning; Liu, Zoe; Xu, Yaowu

    2015-09-01

    Google started an open-source project, entitled the WebM Project, in 2010 to develop royalty-free video codecs for the web. The present-generation codec developed in the WebM project, called VP9, was finalized in mid-2013 and is currently being served extensively by YouTube, resulting in billions of views per day. Even though adoption of VP9 outside Google is still in its infancy, the WebM project has already embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational bitrate reduction over the current-generation codec VP9. Although the project is still in its early stages, a set of new experimental coding tools have already been added to baseline VP9 to achieve modest coding gains over a large enough test set. This paper provides a technical overview of these coding tools.

  20. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    The second-generation Hypercluster computing system is a multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. It is built from standard, off-the-shelf hardware that is readily upgraded as improved technology becomes available. The system is used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. The first-generation Hypercluster system is described in "Hypercluster Parallel Processor" (LEW-15283).

  1. Using Concepts in Literature-based Discovery: Simulating Swanson's Raynaud-Fish Oil and Migraine-Magnesium Discoveries.

    ERIC Educational Resources Information Center

    Weeber, Marc; Klein, Henny; de Jong-van den Berg, Lolkje T. W.; Vos, Rein

    2001-01-01

    Proposes a two-step model of discovery in which new scientific hypotheses can be generated and subsequently tested. Applying advanced natural language processing techniques to find biomedical concepts in text, the model is implemented in a versatile interactive discovery support tool. This tool is used to successfully simulate Don R. Swanson's…

  2. Extraction and Analysis of Display Data

    NASA Technical Reports Server (NTRS)

    Land, Chris; Moye, Kathryn

    2008-01-01

    The Display Audit Suite is an integrated package of software tools that partly automates the detection of Portable Computer System (PCS) display errors. [PCS is a laptop computer used onboard the International Space Station (ISS).] The need for automation stems from the large quantity of PCS displays (6,000+, with 1,000,000+ lines of command and telemetry data). The Display Audit Suite includes data-extraction tools, automatic error detection tools, and database tools for generating analysis spreadsheets. These spreadsheets allow engineers to more easily identify many different kinds of possible errors. The Suite supports over 40 independent analyses, and it complements formal testing by being comprehensive (all displays can be checked) and by revealing errors that are difficult to detect via test. In addition, the Suite can be run early in the development cycle to find and correct errors in advance of testing.

  3. Introduction of the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Costing Tool: a user-friendly spreadsheet program to estimate costs of providing patient-centered interventions.

    PubMed

    Reed, Shelby D; Li, Yanhong; Kamble, Shital; Polsky, Daniel; Graham, Felicia L; Bowers, Margaret T; Samsa, Gregory P; Paul, Sara; Schulman, Kevin A; Whellan, David J; Riegel, Barbara J

    2012-01-01

    Patient-centered health care interventions, such as heart failure disease management programs, are under increasing pressure to demonstrate good value. Variability in costing methods and assumptions in economic evaluations of such interventions limit the comparability of cost estimates across studies. Valid cost estimation is critical to conducting economic evaluations and for program budgeting and reimbursement negotiations. Using sound economic principles, we developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Costing Tool, a spreadsheet program that can be used by researchers and health care managers to systematically generate cost estimates for economic evaluations and to inform budgetary decisions. The tool guides users on data collection and cost assignment for associated personnel, facilities, equipment, supplies, patient incentives, miscellaneous items, and start-up activities. The tool generates estimates of total program costs, cost per patient, and cost per week and presents results using both standardized and customized unit costs for side-by-side comparisons. Results from pilot testing indicated that the tool was well-formatted, easy to use, and followed a logical order. Cost estimates of a 12-week exercise training program in patients with heart failure were generated with the costing tool and were found to be consistent with estimates published in a recent study. The TEAM-HF Costing Tool could prove to be a valuable resource for researchers and health care managers to generate comprehensive cost estimates of patient-centered interventions in heart failure or other conditions for conducting high-quality economic evaluations and making well-informed health care management decisions.
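    The roll-up the tool automates reduces to summing cost categories and dividing by patients and weeks. A toy version (categories taken from the abstract; the unit costs are invented):

        line_items = {
            "personnel": 52000.0, "facilities": 6000.0, "equipment": 3500.0,
            "supplies": 1200.0, "patient_incentives": 4800.0,
            "miscellaneous": 900.0, "start_up": 2500.0,
        }
        n_patients, n_weeks = 120, 12
        total = sum(line_items.values())
        print(f"total ${total:,.0f}; per patient ${total / n_patients:,.2f}; "
              f"per week ${total / n_weeks:,.2f}")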

  4. SU-E-J-92: Validating Dose Uncertainty Estimates Produced by AUTODIRECT, An Automated Program to Evaluate Deformable Image Registration Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student's t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions, within 1-6% for the 10 virtual phantoms and 9% for the physical phantom. For one of the cases, though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance.
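    The per-voxel prediction rests on a small-sample Student's t model over the known-truth test registrations. With the four tests the abstract mentions, an interval estimate for one voxel could be computed as follows (the error values are invented):

        import numpy as np
        from scipy import stats

        errors = np.array([0.8, 1.1, 0.6, 1.4])   # mm: one voxel's error in 4 test DIRs
        mean, sem = errors.mean(), stats.sem(errors)
        lo, hi = stats.t.interval(0.95, len(errors) - 1, loc=mean, scale=sem)
        print(f"predicted error: {mean:.2f} mm (95% interval {lo:.2f} to {hi:.2f})")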

  5. Generation of GHS Scores from TEST and online sources

    EPA Science Inventory

    Alternatives assessment frameworks such as DfE (Design for the Environment) evaluate chemical alternatives in terms of human health effects, ecotoxicity, and fate. T.E.S.T. (Toxicity Estimation Software Tool) can be utilized to evaluate human health in terms of acute oral rat tox...

  6. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    NASA Astrophysics Data System (ADS)

    Naito, O.

    A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs in any HTML5-compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in a NAMELIST, and hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it can be viewed as a test bed for integrating legacy codes into modern systems.

  7. Probability of a false-negative HIV antibody test result during the window period: a tool for pre- and post-test counselling.

    PubMed

    Taylor, Darlene; Durigon, Monica; Davis, Heather; Archibald, Chris; Konrad, Bernhard; Coombs, Daniel; Gilbert, Mark; Cook, Darrel; Krajden, Mel; Wong, Tom; Ogilvie, Gina

    2015-03-01

    Failure to understand the risk of false-negative HIV test results during the window period results in anxiety. Patients typically want accurate test results as soon as possible, while clinicians prefer to wait until the probability of a false negative is virtually nil. This review summarizes the median window periods for third-generation antibody and fourth-generation HIV tests and provides the probability of a false-negative result for various days post-exposure. Data were extracted from published seroconversion panels. A 10-day eclipse period was used to estimate days from infection to first detection of HIV RNA. Median (interquartile range) days to seroconversion were calculated and probabilities of a false-negative result at various time periods post-exposure are reported. The median (interquartile range) window period for third-generation tests was 22 days (19-25) and 18 days (16-24) for fourth-generation tests. The probability of a false-negative result is 0.01 at 80 days post-exposure for third-generation tests and at 42 days for fourth-generation tests. The table of probabilities of false-negative HIV test results may be useful during pre- and post-test HIV counselling to inform co-decision making regarding the ideal time to test for HIV. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  8. Guidelines for testing and release procedures

    NASA Technical Reports Server (NTRS)

    Molari, R.; Conway, M.

    1984-01-01

    Guidelines and procedures are recommended for the testing and release of the types of computer software efforts commonly performed at NASA/Ames Research Center. All recommendations are based on the premise that testing and release activities must be specifically selected for the environment, size, and purpose of each individual software project. Guidelines are presented for building a Test Plan and using formal Test Plan and Test Case Inspections on it. Frequent references are made to NASA/Ames Guidelines for Software Inspections. Guidelines are presented for selecting an Overall Test Approach and for each of the four main phases of testing: (1) Unit Testing of Components, (2) Integration Testing of Components, (3) System Integration Testing, and (4) Acceptance Testing. Tools used for testing are listed, including those available from operating systems used at Ames, specialized tools which can be developed, unit test drivers, stub module generators, and formal test reporting schemes.

  9. Telemetry-Enhancing Scripts

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.

    2009-01-01

    Scripts Providing a Cool Kit of Telemetry Enhancing Tools (SPACKLE) is a set of software tools that fill gaps in the capabilities of other software used in processing downlinked data in the Mars Exploration Rovers (MER) flight and test-bed operations. SPACKLE tools have helped to accelerate the automatic processing and interpretation of MER mission data, enabling non-experts to understand and/or use MER query and data-product command simulation software tools more effectively. SPACKLE has greatly accelerated some operations and provides new capabilities. The tools of SPACKLE are written, variously, in Perl or the C or C++ language. They perform a variety of search and shortcut functions that include the following: generating text-only, Event Report-annotated, and Web-enhanced views of command sequences; labeling integer enumerations with their symbolic meanings in text messages and engineering channels; systematically detecting corruption within data products; generating text-only displays of data-product catalogs, including downlink status; validating and labeling commands related to data products; performing convenient searches of detailed engineering data spanning multiple Martian solar days; generating tables of initial conditions pertaining to engineering, health, and accountability data; simplifying the construction and simulation of command sequences; and performing fast time-format conversions and sorting.
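
    One of the listed functions, labeling integer enumerations with their symbolic meanings, is easy to illustrate. The sketch below is a minimal Python analogue (SPACKLE itself is written in Perl, C, and C++), and the enumeration table is hypothetical rather than taken from any MER dictionary:

    ```python
    # Hypothetical enumeration table for one engineering channel; a real
    # telemetry tool would load such mappings from mission dictionaries.
    MOTOR_STATE = {0: "IDLE", 1: "SPIN_UP", 2: "AT_SPEED", 3: "FAULT"}

    def label_channel(raw_values, table):
        """Render raw integers as 'value (SYMBOL)' strings, keeping
        unknown codes visible instead of silently dropping them."""
        return [f"{v} ({table.get(v, 'UNKNOWN')})" for v in raw_values]

    print(label_channel([0, 2, 7], MOTOR_STATE))
    # -> ['0 (IDLE)', '2 (AT_SPEED)', '7 (UNKNOWN)']
    ```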

  10. AN AUTOMATED SYSTEM FOR PRODUCING UNIFORM SURFACE DEPOSITS OF DRY PARTICLES

    EPA Science Inventory

    A laboratory system has been constructed that uniformly deposits dry particles onto any type of test surface. Devised as a quality assurance tool for the purpose of evaluating surface sampling methods for lead, it also may be used to generate test surfaces for any contaminant ...

  11. Generation of Alternative Assessment Scores using TEST and online data sources

    EPA Science Inventory

    Alternatives assessment frameworks such as DfE (Design for the Environment) evaluate chemical alternatives in terms of human health effects, ecotoxicity, and fate. T.E.S.T. (Toxicity Estimation Software Tool) can be utilized to evaluate human health in terms of acute oral rat tox...

  12. A single sample GnRHa stimulation test in the diagnosis of precocious puberty

    USDA-ARS?s Scientific Manuscript database

    The gonadotropin-releasing hormone (GnRH) stimulation test has been the standard test for diagnosing central precocious puberty. Because GnRH is no longer available, GnRH analogues (GnRHa) are now used. Random LH concentration, measured by the third-generation immunochemiluminometric assay, is a useful screening tool ...

  13. Hull Form Design and Optimization Tool Development

    DTIC Science & Technology

    2012-07-01

    global minimum. The algorithm accomplishes this by using a method known as metaheuristics, which allows the algorithm to examine a large area by... further development of these tools, including the implementation and testing of a new optimization algorithm, the improvement of a rapid hull form... under the 2012 Naval Research Enterprise Intern Program. Subject terms: hydrodynamic, hull form, generation, optimization, algorithm

  14. Visual Literacy Skills of Students in College-Level Biology: Learning Outcomes Following Digital or Hand-Drawing Activities

    ERIC Educational Resources Information Center

    Bell, Justine C.

    2014-01-01

    To test the claim that digital learning tools enhance the acquisition of visual literacy in this generation of biology students, a learning intervention was carried out with 33 students enrolled in an introductory college biology course. This study compared learning outcomes following two types of learning tools: a traditional drawing activity, or…

  15. A Design Tool for Matching UAV Propeller and Power Plant Performance

    NASA Astrophysics Data System (ADS)

    Mangio, Arion L.

    A large body of knowledge is available for matching propellers to engines for large propeller-driven aircraft. Small UAVs and model airplanes operate at much lower Reynolds numbers and use fixed-pitch propellers, so the information for large aircraft is not directly applicable. A design tool is needed that takes into account Reynolds number effects, allows for gear reduction, and supports the selection of a propeller optimized for the airframe. The tool developed in this thesis does this using propeller performance data generated from vortex theory or wind tunnel experiments and combines that data with an engine power curve. The thrust, steady-state power, RPM, and tip Mach number vs. velocity curves are generated. The Reynolds number vs. non-dimensional radial station at an operating point is also found. The tool is then used to design a geared power plant for the SAE Aero Design competition. To measure the power plant performance, a purpose-built engine test stand was constructed. The characteristics of the engine test stand are also presented. The engine test stand was then used to characterize the geared power plant. The power plant uses a 26x16 propeller, a 100/13 gear ratio, and an LRP 0.30 cubic inch engine turning at 28,000 RPM and producing 2.2 HP. Lastly, the measured power plant performance is presented. An important result is that 17 lbf of static thrust is produced.
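
    The matching step the thesis describes, intersecting the propeller's power-required curve with the engine's power-available curve through a gear ratio, can be sketched with the standard non-dimensional propeller relation P = Cp·rho·n^3·D^5. Everything numerical below (the power coefficient, the engine curve fit) is a hypothetical placeholder, chosen only so the numbers land near the abstract's 2.2 HP operating point:

    ```python
    import numpy as np

    # Hypothetical operating point: power coefficient Cp for a fixed-pitch
    # prop at one advance ratio, and an engine curve fit from a dyno.
    rho = 1.225          # air density, kg/m^3
    D = 0.66             # propeller diameter, m (26 in)
    Cp = 0.045           # assumed power coefficient at the design advance ratio

    def prop_power(n):
        """Shaft power absorbed by the propeller, P = Cp * rho * n^3 * D^5
        (n in rev/s), the standard non-dimensional propeller relation."""
        return Cp * rho * n**3 * D**5

    def engine_power(n_engine):
        """Hypothetical engine curve (W) peaking near 28,000 RPM."""
        rpm = n_engine * 60.0
        return max(0.0, 1640.0 * (1 - ((rpm - 28000.0) / 20000.0) ** 2))

    gear = 100.0 / 13.0  # gear reduction: engine revs per prop rev
    # March up the prop speed until power required meets power available.
    for n_prop in np.linspace(20, 80, 601):
        if prop_power(n_prop) >= engine_power(n_prop * gear):
            print(f"match near {n_prop * 60:.0f} prop RPM, "
                  f"{prop_power(n_prop) / 745.7:.2f} HP")
            break
    ```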

  16. Modeling of prepregs during automated draping sequences

    NASA Astrophysics Data System (ADS)

    Krogh, Christian; Glud, Jens A.; Jakobsen, Johnny

    2017-10-01

    The behavior of woven prepreg fabric during automated draping sequences is investigated. A drape tool under development with an arrangement of grippers facilitates the placement of a woven prepreg fabric in a mold. It is essential that the draped configuration is free from wrinkles and other defects. The present study aims at setting up a virtual draping framework capable of modeling the draping process from the initial flat fabric to the final double-curved shape, and at assisting the development of an automated drape tool. The virtual draping framework consists of a kinematic mapping algorithm used to generate target points on the mold, which are used as input to a draping sequence planner. The draping sequence planner prescribes the displacement history for each gripper in the drape tool, and these displacements are then applied to each gripper in a transient model of the draping sequence. The model is based on a transient finite element analysis with the material's constitutive behavior currently being approximated as linear elastic orthotropic. In-plane tensile and bias-extension tests as well as bending tests are conducted and used as input for the model. The virtual draping framework shows good potential for obtaining a better understanding of the drape process and guiding the development of the drape tool. However, results obtained from using the framework on a simple test case indicate that the generation of draping sequences is non-trivial.

  17. High productivity machining of holes in Inconel 718 with SiAlON tools

    NASA Astrophysics Data System (ADS)

    Agirreurreta, Aitor Arruti; Pelegay, Jose Angel; Arrazola, Pedro Jose; Ørskov, Klaus Bonde

    2016-10-01

    Inconel 718 is often employed in aerospace engines and power generation turbines. Numerous studies have demonstrated enhanced productivity when turning with ceramic tools compared to carbide ones; however, there is considerably less information with regard to milling. Moreover, nothing has been published about machining holes with this type of tool. Additional research on different machining techniques, such as circular ramping, is critical to expand the productivity improvements that ceramics can offer. In this study, a 3D model of the machining process and a number of experiments with SiAlON round inserts have been carried out in order to evaluate the effect of the cutting speed and pitch on tool wear and chip generation. The results of this analysis show that three different types of chips are generated and that there are three potential wear zones. Top slice wear is identified as the most critical wear type, followed by notch wear as a secondary wear mechanism. Flank wear and adhesion are also found in most of the tests.

  18. Science and Technology Review, January-February 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Table of contents: accelerators at Livermore; the B-Factory and the Big Bang; assessing exposure to radiation; next generation of computer storage; and a powerful new tool to detect clandestine nuclear tests.

  19. Honeywell Technical Order Transfer Tests.

    DTIC Science & Technology

    1987-06-12

    of simple corrections, a reasonable reproduction of the original could be generated. The quality was not good enough for a production environment. Lack of automated quality control (AQC) tools could account for the errors.

  20. Introduction of the TEAM-HF Costing Tool: A User-Friendly Spreadsheet Program to Estimate Costs of Providing Patient-Centered Interventions

    PubMed Central

    Reed, Shelby D.; Li, Yanhong; Kamble, Shital; Polsky, Daniel; Graham, Felicia L.; Bowers, Margaret T.; Samsa, Gregory P.; Paul, Sara; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara J.

    2011-01-01

    Background Patient-centered health care interventions, such as heart failure disease management programs, are under increasing pressure to demonstrate good value. Variability in costing methods and assumptions in economic evaluations of such interventions limit the comparability of cost estimates across studies. Valid cost estimation is critical to conducting economic evaluations and for program budgeting and reimbursement negotiations. Methods and Results Using sound economic principles, we developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Costing Tool, a spreadsheet program that can be used by researchers or health care managers to systematically generate cost estimates for economic evaluations and to inform budgetary decisions. The tool guides users on data collection and cost assignment for associated personnel, facilities, equipment, supplies, patient incentives, miscellaneous items, and start-up activities. The tool generates estimates of total program costs, cost per patient, and cost per week and presents results using both standardized and customized unit costs for side-by-side comparisons. Results from pilot testing indicated that the tool was well-formatted, easy to use, and followed a logical order. Cost estimates of a 12-week exercise training program in patients with heart failure were generated with the costing tool and were found to be consistent with estimates published in a recent study. Conclusions The TEAM-HF Costing Tool could prove to be a valuable resource for researchers and health care managers to generate comprehensive cost estimates of patient-centered interventions in heart failure or other conditions for conducting high-quality economic evaluations and making well-informed health care management decisions. PMID:22147884

  1. Test tools and test data for the future EUMETSAT EPS-SG platform

    NASA Astrophysics Data System (ADS)

    Khlystova, Iryna; Sunda, Michela

    2017-04-01

    The EUMETSAT Polar System - Second Generation (EPS-SG) represents Europe's contribution to the future Joint Polar System (JPS), which is planned to be established together with the National Oceanic and Atmospheric Administration (NOAA) of the United States, following on from the Initial Joint Polar System (IJPS). Due to its global coverage and the variety of passive and active sensors on the EPS-SG platform, a significant positive impact on Numerical Weather Prediction (NWP) can be expected for all forecasts based on NWP in the 2020-2040 time frame. It will increase direct socio-economic benefits to Member States and leverage additional benefits through its integration into the JPS and cooperation in the context of CGMS and WMO. For EPS-SG, EUMETSAT will develop the overall system and the Overall Ground Segment (OGS) and be responsible for the Payload Data Acquisition and Processing (PDAP) system. This will include all the functionality dedicated to the L0, L1, and L2 operational processors for generation of the near-real-time L1 and L2 central mission products. The European Space Agency will develop the EPS-SG satellites and a number of instruments, with CNES and DLR playing key roles. The general processing chain should be in place and extensively tested before the first data set is sent from the space platform to the ground. For this, numerous test tools, such as satellite data simulators (IDS) and processor prototypes (GPPs and IPPs), need to be developed and operated before the launch of the satellites. EUMETSAT cooperated with several European agencies in order to provide all the testing items in time. Here, we present the logic of the EPS-SG test tools for the generation of test data and provide insights into modern space-based mission planning and preparation activities.

  2. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behavior of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing, which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammar. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
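
    The blackbox idea, enumerating every input the grammar admits up to a size bound, can be shown in miniature. The toy grammar below is hypothetical (the real SCL grammar is far larger), and plain Python recursion stands in for JPF's exhaustive exploration of the generator program:

    ```python
    from itertools import product

    # Toy command grammar standing in for the (much larger) SCL grammar.
    GRAMMAR = {
        "script":  [["cmd"], ["cmd", "script"]],
        "cmd":     [["SET", "var", "val"], ["GET", "var"]],
        "var":     [["heater"], ["valve"]],
        "val":     [["on"], ["off"]],
    }

    def expand(symbol, depth):
        """Yield every terminal string derivable from `symbol` within
        `depth` expansions -- the bounded-exhaustive idea behind the
        blackbox enumeration."""
        if symbol not in GRAMMAR:          # terminal
            yield [symbol]
            return
        if depth == 0:
            return
        for production in GRAMMAR[symbol]:
            # Expand each symbol in the production independently, then
            # take the cross product of the alternatives.
            parts = [list(expand(s, depth - 1)) for s in production]
            for combo in product(*parts):
                yield [tok for part in combo for tok in part]

    scripts = sorted(" ".join(s) for s in expand("script", 4))
    print(len(scripts), "scripts;", "first:", scripts[0])
    ```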

  3. IVisTMSA: Interactive Visual Tools for Multiple Sequence Alignments.

    PubMed

    Pervez, Muhammad Tariq; Babar, Masroor Ellahi; Nadeem, Asif; Aslam, Naeem; Naveed, Nasir; Ahmad, Sarfraz; Muhammad, Shah; Qadri, Salman; Shahid, Muhammad; Hussain, Tanveer; Javed, Maryam

    2015-01-01

    IVisTMSA is a software package of seven graphical tools for multiple sequence alignments. MSApad is an editing and analysis tool. It can load 409% more data than Jalview, STRAP, CINEMA, and Base-by-Base. MSA comparator allows the user to visualize consistent and inconsistent regions of reference and test alignments of more than 21-MB size in less than 12 seconds. MSA comparator is 5,200% and more than 40% more efficient than the BAliBASE C program and FastSP, respectively. The MSA reconstruction tool provides graphical user interfaces for four popular aligners and allows the user to load several sequence files at a time. FASTA generator converts seven alignment formats of unlimited size into FASTA format in a few seconds. MSA ID calculator calculates the identity matrix of more than 11,000 sequences with a sequence length of 2,696 base pairs in less than 100 seconds. The Tree and Distance Matrix calculation tools generate a phylogenetic tree and a distance matrix, respectively, using neighbor joining, % identity, and the BLOSUM 62 matrix.

  4. 78 FR 33961 - National Oceans Month, 2013

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-06

    ... generations to come. Rising to meet that test means addressing threats like overfishing, pollution, and... the tools they need to thrive while protecting the long-term health of our marine ecosystems. Let us...

  5. Generation of non-genomic oligonucleotide tag sequences for RNA template-specific PCR

    PubMed Central

    Pinto, Fernando Lopes; Svensson, Håkan; Lindblad, Peter

    2006-01-01

    Background: In order to overcome genomic DNA contamination in transcriptional studies, reverse template-specific polymerase chain reaction, a modification of reverse transcriptase polymerase chain reaction, is used. The possibility of using tags whose sequences are not found in the genome further improves reverse template-specific polymerase chain reaction experiments. Given the absence of software available to produce genome-suitable tags, a simple tool to fulfill this need was developed. Results: The program was developed in Perl, with separate use of the Basic Local Alignment Search Tool (BLAST), making the tool platform independent (known to run on Windows XP and Linux). In order to test the performance of the generated tags, several molecular experiments were performed. The results show that Tagenerator is capable of generating tags with good priming properties, which will deliberately not result in PCR amplification of genomic DNA. Conclusion: The program Tagenerator is capable of generating tag sequences that combine absence from the genome with good priming properties for RT-PCR based experiments, circumventing the effects of genomic DNA contamination in an RNA sample. PMID:16820068
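
    The generation half of such a pipeline is straightforward to sketch: propose random oligonucleotides, keep those with reasonable priming properties, and leave the genome-absence screen to BLAST as the paper describes. The filters below (GC window, homopolymer-run limit) are generic illustrative choices, not Tagenerator's actual criteria:

    ```python
    import random

    def good_primer_properties(tag, gc_lo=0.4, gc_hi=0.6, max_run=3):
        """Cheap priming-property filters: balanced GC content and no long
        single-base runs. (Melting temperature and the genome-absence
        check via BLAST would follow in the real pipeline.)"""
        gc = (tag.count("G") + tag.count("C")) / len(tag)
        no_runs = all(b * (max_run + 1) not in tag for b in "ACGT")
        return gc_lo <= gc <= gc_hi and no_runs

    random.seed(1)
    candidates = ("".join(random.choices("ACGT", k=20)) for _ in range(10000))
    tags = [t for t in candidates if good_primer_properties(t)][:5]
    print(tags)
    ```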

  6. Improving Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-02-01

    New test procedure evaluates quality and accuracy of energy analysis tools for the residential building retrofit market. Reducing the energy use of existing homes in the United States offers significant energy-saving opportunities, which can be identified through building simulation software tools that calculate optimal packages of efficiency measures. To improve the accuracy of energy analysis for residential buildings, the National Renewable Energy Laboratory's (NREL) Buildings Research team developed the Building Energy Simulation Test for Existing Homes (BESTEST-EX), a method for diagnosing and correcting errors in building energy audit software and calibration procedures. BESTEST-EX consists of building physics and utility bill calibration test cases, which software developers can use to compare their tools' simulation findings to reference results generated with state-of-the-art simulation tools. Overall, the BESTEST-EX methodology: (1) tests software predictions of retrofit energy savings in existing homes; (2) ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX is helping software developers identify and correct bugs in their software, as well as develop and test utility bill calibration procedures.

  7. Real-time simulation of a Doubly-Fed Induction Generator based wind power system on eMEGASimRTM Real-Time Digital Simulator

    NASA Astrophysics Data System (ADS)

    Boakye-Boateng, Nasir Abdulai

    The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems as a basis for performing more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform's real-time simulation model: an average SimpowerSystemsRTM DFIG wind turbine, and a detailed DFIG-based wind turbine using ARTEMiSRTM components. The platform model implemented here consists of a high-voltage transmission system with four integrated wind farm models consisting in total of 65 DFIG-based wind turbines; it was developed and tested on OPAL-RT's eMEGASimRTM Real-Time Digital Simulator.

  8. Designing Test Suites for Software Interactions Testing

    DTIC Science & Technology

    2004-01-01

    the annual cost of insufficient software testing methods and tools in the United States is between 22.2 and 59.5 billion US dollars [13, 14]. This study...

  9. Multilayer composition coatings for cutting tools: formation and performance properties

    NASA Astrophysics Data System (ADS)

    Tabakov, Vladimir P.; Vereschaka, Anatoly S.; Vereschaka, Alexey A.

    2018-03-01

    The paper considers the concept of a multi-layer coating architecture in which each layer has a predetermined functionality. The latest generation of multi-layered coatings for cutting tools secures the dual nature of the coating: it must not only improve the mechanical and physical characteristics of the cutting tool material, but also reduce the thermo-mechanical effect on the cutting tool that determines wear intensity. Presented here are the results of the development of combined methods of forming multi-layer coatings with improved properties. A combined method of forming coatings using a pulsed laser reduced excessively high levels of compressive residual stress and increased the micro-hardness of the multilayered coatings. Tests of coated HSS tools showed that the use of additional pulsed laser processing increases tool life up to threefold. Using filtered cathodic vacuum arc deposition for the generation of multilayer coatings based on the TiAlN compound increased the wear resistance of carbide tools twofold compared with the tool life of cutting tools with commercial TiN coatings. The aim of this study was to develop an innovative methodological approach to the deposition of multilayer coatings for cutting tools, with the architecture, properties, and parameters of the coating selected on the basis of sound knowledge of coating failure in the machining process.

  10. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)

    EPA Science Inventory

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...
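
    Since the record truncates the URL scheme, the pattern below is purely hypothetical; it only illustrates the stated idea of requesting a prediction through a single GET URL that encodes the endpoint and QSAR method:

    ```python
    import requests  # third-party; pip install requests

    # Purely hypothetical URL pattern -- the record truncates the real
    # scheme, so only the idea (endpoint + method + structure encoded in
    # one GET request) is illustrated here.
    BASE = "https://example.epa.invalid/webtest"

    def predict(endpoint, method, smiles):
        url = f"{BASE}/{endpoint}/{method}/{smiles}"
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        return response.json()

    # e.g. predict("LD50", "consensus", "c1ccccc1")  # benzene (illustrative)
    ```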

  11. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)(Bled Slovenia)

    EPA Science Inventory

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  12. A Hybrid Parachute Simulation Environment for the Orion Parachute Development Project

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    A parachute simulation environment (PSE) has been developed that aims to take advantage of legacy parachute simulation codes and modern object-oriented programming techniques. This hybrid simulation environment provides the parachute analyst with a natural and intuitive way to construct simulation tasks while preserving the pedigree and authority of established parachute simulations. NASA currently employs four simulation tools for developing and analyzing air-drop tests performed by the CEV Parachute Assembly System (CPAS) Project. These tools were developed at different times, in different languages, and with different capabilities in mind. As a result, each tool has a distinct interface and set of inputs and outputs. However, regardless of the simulation code that is most appropriate for the type of test, engineers typically perform similar tasks for each drop test, such as prediction of loads, assessment of altitude, and sequencing of disreefs or cut-aways. An object-oriented approach to simulation configuration allows the analyst to choose models of real physical test articles (parachutes, vehicles, etc.) and sequence them to achieve the desired test conditions. Once configured, these objects are translated into traditional input lists and processed by the legacy simulation codes. This approach minimizes the number of simulation inputs that the engineer must track while configuring an input file. An object-oriented approach to simulation output allows a common set of post-processing functions to perform routine tasks, such as plotting and timeline generation, with minimal sensitivity to the simulation that generated the data. Flight test data may also be translated into the common output class to simplify test reconstruction and analysis.
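
    The configuration idea, building an object graph of test articles and flattening it into a legacy input list, can be sketched as follows. The classes, fields, and card format are hypothetical stand-ins, not the actual CPAS simulation inputs:

    ```python
    from dataclasses import dataclass

    # Hypothetical stand-ins for the PSE's test-article objects.
    @dataclass
    class Parachute:
        name: str
        drag_area: float      # ft^2
        deploy_time: float    # s after drop

    @dataclass
    class Vehicle:
        name: str
        mass: float           # slugs

    def to_legacy_input(vehicle, chutes):
        """Flatten the object graph into the ordered keyword list a
        legacy card-style simulation expects."""
        lines = [f"VEHICLE {vehicle.name} MASS={vehicle.mass:.1f}"]
        for i, c in enumerate(sorted(chutes, key=lambda c: c.deploy_time), 1):
            lines.append(f"CHUTE{i} {c.name} CDA={c.drag_area:.0f} "
                         f"TDEPLOY={c.deploy_time:.1f}")
        return "\n".join(lines)

    print(to_legacy_input(Vehicle("capsule", 280.0),
                          [Parachute("main", 4400.0, 12.0),
                           Parachute("drogue", 250.0, 2.0)]))
    ```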

  13. Development and psychometric testing of the Carter Assessment of Critical Thinking in Midwifery (Preceptor/Mentor version).

    PubMed

    Carter, Amanda G; Creedy, Debra K; Sidebotham, Mary

    2016-03-01

    To develop and test a tool designed for use by preceptors/mentors to assess undergraduate midwifery students' critical thinking in practice, a descriptive cohort design was used. Participants were 106 midwifery clinicians working in a range of maternity settings in Queensland, Australia, who had acted in the role of preceptor for undergraduate midwifery students. This study followed a staged model for tool development recommended by DeVellis (2012). This included generation of items, content validity testing through mapping of draft items to critical thinking concepts and expert review, administration of items to a convenience sample of preceptors, and psychometric testing. A 24-item tool titled the Carter Assessment of Critical Thinking in Midwifery (CACTiM) was completed by registered midwives in relation to students they had recently preceptored in the clinical environment. Ratings by experts revealed a content validity index score of 0.97, representing good content validity. An evaluation of construct validity through factor analysis generated three factors: 'partnership in practice', 'reflection on practice' and 'practice improvements'. The scale demonstrated good internal reliability with a Cronbach alpha coefficient of 0.97. The mean total score for the CACTiM scale was 116.77 (SD=16.68) with a range of 60-144. Total and subscale scores correlated significantly. The CACTiM (Preceptor/Mentor version) was found to be a valid and reliable tool for use by preceptors to assess critical thinking in undergraduate midwifery students. Given the importance of critical thinking skills for midwifery practice, mapping and assessing critical thinking development in students' practice across an undergraduate programme is vital. The CACTiM (Preceptor/Mentor version) has utility for clinical education, research and practice. The tool can inform and guide preceptors' assessment of students' critical thinking in practice. The availability of a reliable and valid tool can be used to research the development of critical thinking in practice. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  14. Using OpenTarget to Generate Potential Countermeasures for Long-Term Space Exposure from Data Available on GeneLab

    NASA Technical Reports Server (NTRS)

    Beheshti, Afshin

    2018-01-01

    GeneLab as a general tool for the scientific community; utilizing GeneLab datasets to generate hypotheses and determine potential biological targets against health risks due to long-term space missions; how OpenTarget can be used to discover novel drugs to test as countermeasures that can be utilized by astronauts.

  15. ALOG: A spreadsheet-based program for generating artificial logs

    Treesearch

    Matthew F. Winn; Randolph H. Wynne; Philip A. Araman

    2004-01-01

    Log sawing simulation computer programs can be valuable tools for training sawyers as well as for testing different sawing patterns. Most available simulation programs rely on databases from which to draw logs and can be very costly and time-consuming to develop. ALOG (Artificial LOg Generator) is a Microsoft Excel®-based computer program that was developed to...

  16. Evaluation of software tools for automated identification of neuroanatomical structures in quantitative β-amyloid PET imaging to diagnose Alzheimer's disease.

    PubMed

    Tuszynski, Tobias; Rullmann, Michael; Luthardt, Julia; Butzke, Daniel; Tiepolt, Solveig; Gertz, Hermann-Josef; Hesse, Swen; Seese, Anita; Lobsien, Donald; Sabri, Osama; Barthel, Henryk

    2016-06-01

    For regional quantification of nuclear brain imaging data, defining volumes of interest (VOIs) by hand is still the gold standard. As this procedure is time-consuming and operator-dependent, a variety of software tools for automated identification of neuroanatomical structures have been developed. As the quality and performance of those tools have so far been poorly investigated for amyloid PET data analysis, in this project we compared four algorithms for automated VOI definition (HERMES Brass, two PMOD approaches, and FreeSurfer) against the conventional method. We systematically analyzed florbetaben brain PET and MRI data of ten patients with probable Alzheimer's dementia (AD) and ten age-matched healthy controls (HCs) collected in a previous clinical study. VOIs were manually defined on the data as well as through the four automated workflows. Standardized uptake value ratios (SUVRs) with the cerebellar cortex as a reference region were obtained for each VOI. SUVR comparisons between ADs and HCs were carried out using Mann-Whitney U tests, and effect sizes (Cohen's d) were calculated. SUVRs of automatically generated VOIs were correlated with SUVRs of conventionally derived VOIs (Pearson's tests). The composite neocortex SUVRs obtained by manually defined VOIs were significantly higher for ADs vs. HCs (p=0.010, d=1.53). This was also the case for the four tested automated approaches, which achieved effect sizes of d=1.38 to d=1.62. SUVRs of automatically generated VOIs correlated significantly with those of the hand-drawn VOIs in a number of brain regions, with regional differences in the degree of these correlations. The best overall correlation was observed in the lateral temporal VOI for all tested software tools (r=0.82 to r=0.95, p<0.001). Automated VOI definition by the software tools tested has great potential to substitute for the current standard procedure of manually defining VOIs in β-amyloid PET data analysis.
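
    The statistical workflow named in the abstract (Mann-Whitney U for the AD-vs-HC comparison, Cohen's d for effect size, Pearson correlation for automated-vs-manual agreement) is standard and easy to reproduce. The SUVR values below are synthetic placeholders, not the study's data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Hypothetical composite-neocortex SUVRs for 10 AD patients and 10 HCs,
    # standing in for the florbetaben values in the study.
    suvr_ad = rng.normal(1.6, 0.15, 10)
    suvr_hc = rng.normal(1.3, 0.15, 10)

    # Group comparison as in the paper: Mann-Whitney U plus Cohen's d.
    u, p = stats.mannwhitneyu(suvr_ad, suvr_hc, alternative="two-sided")
    pooled_sd = np.sqrt((suvr_ad.var(ddof=1) + suvr_hc.var(ddof=1)) / 2)
    d = (suvr_ad.mean() - suvr_hc.mean()) / pooled_sd
    print(f"U={u:.0f}, p={p:.4f}, Cohen's d={d:.2f}")

    # Agreement between automated and manual VOIs: Pearson correlation.
    suvr_manual = suvr_ad + rng.normal(0, 0.05, 10)
    r, p_r = stats.pearsonr(suvr_ad, suvr_manual)
    print(f"auto vs. manual: r={r:.2f}, p={p_r:.4f}")
    ```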

  17. Tools and technologies needed for conducting planetary field geology while on EVA: Insights from the 2010 Desert RATS geologist crewmembers

    NASA Astrophysics Data System (ADS)

    Young, Kelsey; Hurtado, José M.; Bleacher, Jacob E.; Brent Garry, W.; Bleisath, Scott; Buffington, Jesse; Rice, James W.

    2013-10-01

    The tools used by crews while on extravehicular activity during future missions to other bodies in the Solar System will be a combination of traditional geologic field tools (e.g. hammers, rakes, sample bags) and state-of-the-art technologies (e.g. high definition cameras, digital situational awareness devices, and new geologic tools). In the 2010 Desert Research and Technology Studies (RATS) field test, four crews, each consisting of an astronaut/engineer and field geologist, tested and evaluated various technologies during two weeks of simulated spacewalks in the San Francisco volcanic field, Arizona. These tools consisted of both Apollo-style field geology tools and modern technological equipment not used during the six Apollo lunar landings. The underlying exploration driver for this field test was to establish the protocols and technology needed for an eventual manned mission to an asteroid, the Moon, or Mars. The authors of this paper represent Desert RATS geologist crewmembers as well as two engineers who worked on technology development. Here we present an evaluation and assessment of these tools and technologies based on our first-hand experience of using them during the analog field test. We intend this to serve as a basis for continued development of technologies and protocols used for conducting planetary field geology as the Solar System exploration community moves forward into the next generation of planetary surface exploration.

  18. Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks

    NASA Astrophysics Data System (ADS)

    Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji

    High-speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus, 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because a long processing duration causes performance degradation. This requires testers (persons who perform the tests) to know precisely how long a packet is held by various network nodes. Without any tool's help, this task is time-consuming and error-prone. Thus, we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. Such recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm that avoids dropping packets.
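
    Given headers recorded with synchronized timestamps at two observation points, the holding duration is just a join on a packet key followed by a timestamp difference. The key fields and times below are hypothetical:

    ```python
    # Minimal sketch of the multi-point idea: headers recorded with
    # synchronized timestamps at an ingress and an egress observation
    # point; matching on a packet key gives the per-node holding time.
    ingress = {("10.0.0.1", 5001, 17): 0.001200,   # (src, ip_id, proto) -> t_in
               ("10.0.0.1", 5002, 17): 0.002900}
    egress  = {("10.0.0.1", 5001, 17): 0.004700,
               ("10.0.0.1", 5002, 17): 0.003400}

    def holding_durations(t_in, t_out):
        """Per-packet duration spent inside the node, for packets seen at
        both observation points (losses simply drop out of the join)."""
        return {k: t_out[k] - t_in[k] for k in t_in.keys() & t_out.keys()}

    for key, dt in sorted(holding_durations(ingress, egress).items()):
        print(f"packet {key[1]}: held {dt * 1e3:.1f} ms")
    ```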

  19. Hierarchical Testing with Automated Document Generation for Amanzi, ASCEM's Subsurface Flow and Reactive Transport Simulator

    NASA Astrophysics Data System (ADS)

    Moulton, J. D.; Steefel, C. I.; Yabusaki, S.; Castleton, K.; Scheibe, T. D.; Keating, E. H.; Freedman, V. L.

    2013-12-01

    The Advanced Simulation Capability for Environmental Management (ASCEM) program is developing an approach and open-source tool suite for standardized risk and performance assessments at legacy nuclear waste sites. These assessments use a graded and iterative approach, beginning with simplified, highly abstracted models and adding geometric and geologic complexity as understanding is gained. To build confidence in this assessment capability, extensive testing of the underlying tools is needed. Since the tools themselves, such as the subsurface flow and reactive-transport simulator Amanzi, are under active development, testing must be both hierarchical and highly automated. In this presentation we show how we have met these requirements by leveraging the Python-based open-source documentation system called Sphinx with several other open-source tools. Sphinx builds on the reStructuredText tool docutils, with important extensions that include high-quality formatting of equations and integrated plotting through matplotlib. This allows the documentation, as well as the input files for tests, benchmark and tutorial problems, to be maintained with the source code under a version control system. In addition, it enables developers to build documentation in several different formats (e.g., html and pdf) from a single source. We will highlight these features and discuss important benefits of this approach for Amanzi. In addition, we'll show that some of ASCEM's other tools, such as the sampling provided by the Uncertainty Quantification toolset, are naturally leveraged to enable more comprehensive testing. Finally, we will highlight the integration of this hierarchical testing and documentation framework with our build system and tools (CMake, CTest, and CDash).
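
    A minimal Sphinx configuration showing the combination described, reStructuredText sources with rendered math and executable matplotlib plots, might look as follows; the extension names are real Sphinx/matplotlib extensions, while the project values are placeholders rather than Amanzi's actual configuration:

    ```python
    # conf.py -- minimal Sphinx configuration in the spirit described
    # above: reStructuredText sources, rendered equations, inline
    # matplotlib plots. Project names are placeholders, not Amanzi's.
    project = "example-simulator-docs"
    extensions = [
        "sphinx.ext.mathjax",                   # high-quality equation rendering
        "matplotlib.sphinxext.plot_directive",  # executes .. plot:: blocks
    ]
    source_suffix = ".rst"
    master_doc = "index"
    # One source, several outputs: `sphinx-build -b html` and `-b latex`
    # produce HTML and PDF-ready LaTeX from the same .rst files.
    ```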

  20. Hole expansion test of third generation steels

    NASA Astrophysics Data System (ADS)

    Agirre, Julen; Mendiguren, Joseba; Galdos, Lander; de Argandoña, Eneko Sáenz

    2017-10-01

    The trend towards the implementation of new materials in automobile chassis is making the manufacture of the components that build them up considerably more complex. In this scenario, materials with higher strength and lower formability are faced daily by tool makers and component producers, which narrows process windows and pushes forming processes to the limits of the materials. One of the concerns that tool makers must face during the definition of the tools is the expansion ratio that holes in the sheet may reach before a breakage occurs due to the stretching of the material (also known as edge cracks). For the characterization of such limits, a standard test, the hole expansion test, can be applied so that the limits of the material are known. In the present study, hole expansion tests of a third-generation steel, Fortiform 1050 with a thickness of 1.2 millimeters, have been carried out and compared with a mild steel, DX54D with a thickness of 0.6 millimeters. A comparison for each material in terms of the technology used to punch the hole, mechanical punching vs. laser cutting, has also been conducted. In addition, the measurement technique (online measurement vs. offline measurement) followed in the Hole Expansion Ratio (HER) identification has also been analyzed. Finally, differences between both materials and techniques are presented.
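
    For reference, the quantity measured by the standardized test (ISO 16630) is the hole expansion ratio, where D_0 is the initial punched-hole diameter and D_h the diameter at which a through-thickness edge crack first appears; the worked numbers are illustrative only:

    ```latex
    \lambda \;=\; \frac{D_h - D_0}{D_0} \times 100\,\%
    \qquad\text{e.g. } D_0 = 10\,\mathrm{mm},\; D_h = 13.2\,\mathrm{mm}
    \;\Rightarrow\; \lambda = 32\,\%
    ```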

  1. A novel diamond micro-/nano-machining process for the generation of hierarchical micro-/nano-structures

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiwei; To, Suet; Ehmann, Kornel F.; Xiao, Gaobo; Zhu, Wule

    2016-03-01

    A new mechanical micro-/nano-machining process that combines rotary spatial vibrations (RSV) of a diamond tool and the servo motions of the workpiece is proposed and applied for the generation of multi-tier hierarchical micro-/nano-structures. In the proposed micro-/nano-machining system, the servo motion, as the primary cutting motion generated by a slow-tool-servo, is adopted for the fine generation of the primary surfaces with complex shapes. The RSV, as the tertiary cutting operation, is superimposed on the secondary fundamental rotary cutting motion to construct secondary nano-structures on the primary surface. Since the RSV system generally works at much higher frequencies and motion resolution than the primary and secondary motions, it leads to an inherent hierarchical cutting architecture. To investigate the machining performance, complex micro-/nano-structures were generated and explored by both numerical simulations and actual cutting tests. Rotary vibrations of the diamond tool at a constant rotational distance offer an inherent constant cutting velocity, leading to the ability for the generation of homogeneous micro-/nano-structures with fixed amplitudes and frequencies of the vibrations, even over large-scale surfaces. Furthermore, by deliberately combining the non-resonant three-axial vibrations and the servo motion, the generation of a variety of micro-/nano-structures with complex shapes and with flexibly tunable feature sizes can be achieved.
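
    The kinematics of the hierarchical cut reduce to a superposition: a slow servo motion shapes the primary surface while a much faster, small-amplitude vibration writes the secondary structure on top of it. The sketch below composes the two motions with hypothetical amplitudes and frequencies:

    ```python
    import numpy as np

    # Hypothetical motion parameters: a slow servo path (primary surface)
    # plus a small, much faster vibration (secondary nano-structure).
    t = np.linspace(0.0, 1.0, 20000)         # time, s
    f_servo, f_rsv = 2.0, 1000.0             # Hz: servo vs. vibration
    A_servo, A_rsv = 100.0, 0.2              # um: feature amplitudes

    z_servo = A_servo * np.sin(2 * np.pi * f_servo * t)  # primary shape
    z_rsv = A_rsv * np.sin(2 * np.pi * f_rsv * t)        # superimposed ripple
    z_tool = z_servo + z_rsv                 # two-tier hierarchical profile

    # The vibration runs 500x faster than the servo motion, so each primary
    # undulation carries hundreds of uniform secondary features.
    print(f"peak-to-valley: primary {np.ptp(z_servo):.1f} um, "
          f"secondary {np.ptp(z_rsv):.2f} um")
    ```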

  2. Analytical Tools for Space Suit Design

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay

    2011-01-01

    As indicated by the implementation of multiple small project teams within the agency, NASA is adopting a lean approach to hardware development that emphasizes quick product realization and rapid response to shifting program and agency goals. Over the past two decades, space suit design has been evolutionary in approach, with emphasis on building prototypes and then testing with the largest practical range of subjects possible. The results of these efforts show continuous improvement but make scaled design and performance predictions almost impossible with limited budgets and little time. Thus, in an effort to start changing the way NASA approaches space suit design and analysis, the Advanced Space Suit group has initiated the development of an integrated design and analysis tool. It is a multi-year, if not decadal, development effort that, when fully implemented, is envisioned to generate analysis of any given space suit architecture or, conversely, predictions of ideal space suit architectures given specific mission parameters. The master tool will exchange information to and from a set of five sub-tool groups in order to generate the desired output. The basic functions of each sub-tool group, the initial relationships between the sub-tools, and a comparison to state-of-the-art software and tools are discussed.

  3. MLM Builder: An Integrated Suite for Development and Maintenance of Arden Syntax Medical Logic Modules

    PubMed Central

    Sailors, R. Matthew

    1997-01-01

    The Arden Syntax specification for sharable computerized medical knowledge bases has not been widely utilized in the medical informatics community because of a lack of tools for developing Arden Syntax knowledge bases (Medical Logic Modules). The MLM Builder is a Microsoft Windows-hosted CASE (Computer Aided Software Engineering) tool designed to aid in the development and maintenance of Arden Syntax Medical Logic Modules (MLMs). The MLM Builder consists of the MLM Writer (an MLM generation tool), OSCAR (an anagram of Object-oriented ARden Syntax Compiler), a test database, and the MLManager (an MLM management information system). Working together, these components form a self-contained, unified development environment for the creation, testing, and maintenance of Arden Syntax Medical Logic Modules.

  4. Moles: Tool-Assisted Environment Isolation with Closures

    NASA Astrophysics Data System (ADS)

    de Halleux, Jonathan; Tillmann, Nikolai

    Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.

  5. Next Generation of Leaching Tests

    EPA Science Inventory

    A corresponding abstract has been cleared for this presentation. The four methods comprising the Leaching Environmental Assessment Framework are described along with the tools to support implementation of the more rigorous and accurate source terms that are developed using LEAF ...

  6. Tools and techniques for estimating high intensity RF effects

    NASA Astrophysics Data System (ADS)

    Zacharias, Richard L.; Pennock, Steve T.; Poggio, Andrew J.; Ray, Scott L.

    1992-01-01

    Tools and techniques for estimating and measuring coupling and component disturbance for avionics and electronic controls are described. A finite-difference time-domain (FD-TD) modeling code, TSAR, used to predict coupling is described. This code can quickly generate a mesh model to represent the test object. Some recent applications as well as the advantages and limitations of using such a code are described. Facilities and techniques for making low-power coupling measurements and for making direct-injection test measurements of device disturbance are also described. Some scaling laws for coupling and device effects are presented, along with a method for extrapolating these low-power test results to high-power full-system effects.
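
    TSAR itself is a full 3D FD-TD code; the textbook one-dimensional Yee scheme below shows the leapfrog E/H update at the heart of any such solver. The grid size, Courant number, and source are arbitrary illustrative choices:

    ```python
    import numpy as np

    # Textbook 1D Yee/FD-TD update in free space (normalized units):
    # the same leapfrog E/H updates that 3D codes like TSAR apply on a
    # mesh of the test object. Grid size and source are arbitrary.
    nz, nsteps = 400, 800
    ez = np.zeros(nz)
    hy = np.zeros(nz - 1)

    for n in range(nsteps):
        # H update from the curl of E (half time step, Courant number 0.5).
        hy += 0.5 * (ez[1:] - ez[:-1])
        # E update from the curl of H (interior points only).
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
        # Soft Gaussian source injected mid-grid.
        ez[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)

    print("peak |Ez| after propagation:", float(np.abs(ez).max()))
    ```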

  7. Testing a New Generation: Implementing Clickers as an Extension Data Collection Tool

    ERIC Educational Resources Information Center

    Parmer, Sondra M.; Parmer, Greg; Struempler, Barb

    2012-01-01

    Using clickers to gauge student understanding in large classrooms is well documented. Less well known is the effectiveness of using clickers with youth for test taking in large-scale Extension programs. This article describes the benefits and challenges of collecting evaluation data using clickers with a third-grade population participating in a…

  8. A Vignette (User's Guide) for “An R Package for Statistical ...

    EPA Pesticide Factsheets

    StatCharrms is a graphical user front-end for ease of use in analyzing data generated from OCSPP 890.2200, the Medaka Extended One Generation Reproduction Test (MEOGRT), and OCSPP 890.2300, the Larval Amphibian Gonad Development Assay (LAGDA). The analyses StatCharrms is capable of performing are: the Rao-Scott adjusted Cochran-Armitage test for trend By Slices (RSCABS), a standard Cochran-Armitage test for trend By Slices (SCABS), a mixed-effects Cox proportional hazards model, the Jonckheere-Terpstra step-down trend test, the Dunn test, one-way ANOVA, weighted ANOVA, mixed-effects ANOVA, repeated-measures ANOVA, and the Dunnett test. This document provides a User's Manual (termed a Vignette by the Comprehensive R Archive Network (CRAN)) for the previously created R-code tool called StatCharrms (Statistical analysis of Chemistry, Histopathology, and Reproduction endpoints using Repeated measures and Multi-generation Studies). The StatCharrms R code has been publicly available directly from EPA staff since the approval of OCSPP 890.2200 and 890.2300, and is now publicly available on CRAN.

  9. CEQer: a graphical tool for copy number and allelic imbalance detection from whole-exome sequencing data.

    PubMed

    Piazza, Rocco; Magistroni, Vera; Pirola, Alessandra; Redaelli, Sara; Spinelli, Roberta; Redaelli, Serena; Galbiati, Marta; Valletta, Simona; Giudici, Giovanni; Cazzaniga, Giovanni; Gambacorti-Passerini, Carlo

    2013-01-01

    Copy number alterations (CNAs) are common events occurring in leukaemias and solid tumors. Comparative Genome Hybridization (CGH) is currently the gold-standard technique to analyze CNAs; however, CGH analysis requires dedicated instruments and is able to perform only low-resolution Loss of Heterozygosity (LOH) analyses. Here we present CEQer (Comparative Exome Quantification analyzer), a new graphical, event-driven tool for coupled CNA/allelic-imbalance (AI) analysis of exome sequencing data. By using case-control matched exome data, CEQer performs a comparative digital exonic quantification to generate CNA data and couples this information with exome-wide LOH and allelic imbalance detection. These data are used to build mixed statistical/heuristic models allowing the identification of CNA/AI events. To test our tool, we initially used in silico generated data, then we performed whole-exome sequencing on 20 leukemic specimens and corresponding matched controls and analyzed the results using CEQer. Taken globally, these analyses showed that the combined use of comparative digital exon quantification and LOH/AI detection allows generating very accurate CNA data. Therefore, we propose CEQer as an efficient, robust and user-friendly graphical tool for the identification of CNA/AI events in whole-exome sequencing data.
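
    The comparative digital exonic quantification step can be shown in miniature: normalize case and control exon counts by library size, take per-exon log2 ratios, and flag outliers. The counts and calling thresholds below are hypothetical, and a real tool would also model noise and couple the calls with the LOH/allelic-imbalance evidence described above:

    ```python
    import numpy as np

    # Hypothetical per-exon read counts for a tumour/normal pair.
    case = np.array([120, 105, 300, 15, 110, 260], dtype=float)
    ctrl = np.array([100, 100, 150, 60, 100, 110], dtype=float)

    # Library-size normalization, then per-exon log2 case/control ratio:
    # the "comparative digital exonic quantification" idea in miniature.
    case_n = case / case.sum()
    ctrl_n = ctrl / ctrl.sum()
    log2_ratio = np.log2(case_n / ctrl_n)

    # Toy calling thresholds (roughly one copy gained or lost).
    for i, lr in enumerate(log2_ratio):
        call = "gain" if lr > 0.58 else "loss" if lr < -0.58 else "neutral"
        print(f"exon {i}: log2 ratio {lr:+.2f} -> {call}")
    ```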

  10. Integrated verification and testing system (IVTS) for HAL/S programs

    NASA Technical Reports Server (NTRS)

    Senn, E. H.; Ames, K. R.; Smith, K. A.

    1983-01-01

    The IVTS is a large software system designed to support user-controlled verification analysis and testing activities for programs written in the HAL/S language. The system is composed of a user interface and user command language, analysis tools and an organized data base of host system files. The analysis tools are of four major types: (1) static analysis, (2) symbolic execution, (3) dynamic analysis (testing), and (4) documentation enhancement. The IVTS requires a split HAL/S compiler, divided at the natural separation point between the parser/lexical analyzer phase and the target machine code generator phase. The IVTS uses the internal program form (HALMAT) between these two phases as primary input for the analysis tools. The dynamic analysis component requires some way to 'execute' the object HAL/S program. The execution medium may be an interpretive simulation or an actual host or target machine.

  11. User Manual for the PROTEUS Mesh Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Micheal A.; Shemon, Emily R

    2016-09-19

    PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real-world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into creating multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given that one has an input mesh format acceptable for PROTEUS, we have constructed several tools which allow further mesh and geometry construction (e.g., mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and the output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific to a given mesh tool (such as .axial or .merge) can be used as "mesh" input for any of the mesh tools discussed in this manual.

  12. ExEP yield modeling tool and validation test results

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration-time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
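
    The flavor of the unittest-based physics checks described, deterministic inputs with hand-checkable outputs, can be sketched as follows; the function under test is a hypothetical stand-in, not an actual EXOSIMS routine:

    ```python
    import unittest

    def integration_time(flux_planet, flux_noise, snr_target):
        """Hypothetical stand-in for a yield-model calculation: time for
        a Poisson-limited observation to reach a target SNR,
        t = SNR^2 * (F_p + F_n) / F_p^2."""
        return snr_target**2 * (flux_planet + flux_noise) / flux_planet**2

    class TestIntegrationTime(unittest.TestCase):
        def test_deterministic_case(self):
            # A target with known fluxes gives a value we can check by
            # hand: 5^2 * (10 + 90) / 10^2 = 25 s.
            self.assertAlmostEqual(integration_time(10.0, 90.0, 5.0), 25.0)

        def test_brighter_is_faster(self):
            self.assertLess(integration_time(20.0, 90.0, 5.0),
                            integration_time(10.0, 90.0, 5.0))

    if __name__ == "__main__":
        unittest.main()
    ```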

  13. Managing design excellence tools during the development of new orthopaedic implants.

    PubMed

    Défossez, Henri J P; Serhan, Hassan

    2013-11-01

    Design excellence (DEX) tools have been widely used for years in some industries for their potential to facilitate new product development. The medical sector, targeted by cost pressures, has therefore started adopting them. Numerous tools are available; however, only appropriate deployment during the new product development stages can optimize the overall process. The primary study objectives were to describe generic tools, illustrate their implementation and management during the development of new orthopaedic implants, and compile a reference package. Secondary objectives were to present the DEX tool investment costs and savings, since the method can require significant resources for which companies must carefully plan. The publicly available DEX method "Define Measure Analyze Design Verify Validate" was adopted and implemented during the development of a new spinal implant. Several tools proved most successful at developing the correct product, addressing clinical needs, and increasing market penetration potential, while reducing design iterations and manufacturing validations. Cost analysis and a Pugh Matrix coupled with multi-generation planning enabled the development of a strong rationale to activate the project and set the vision and goals. Improved risk management and a product usage map established a robust technical verification-validation program. Design of experiments and process quantification facilitated design for manufacturing of critical features as early as the concept phase. Biomechanical testing with analysis of variance provided a validation model with a recognized statistical performance baseline. Among those tools, only certain ones required minimal resources (i.e., business case, multi-generational plan, project value proposition, Pugh Matrix, critical-to-quality process validation techniques), while others required significant investments (i.e., voice of customer, product usage map, improved risk management, design of experiments, biomechanical testing techniques). All the techniques used provided savings exceeding investment costs. Some other tools were considered and found less relevant. A matrix summarizes the investment costs and the estimated savings generated. Globally, all companies can benefit from using DEX by smartly selecting and estimating those tools with the best return on investment at the start of the project. For this, a good understanding of the available company resources, background, and development strategy is needed. In conclusion, it was possible to illustrate that appropriate management of design excellence tools can greatly facilitate the development of new orthopaedic implant systems.

  14. Integration of Irma tactical scene generator into directed-energy weapon system simulation

    NASA Astrophysics Data System (ADS)

    Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.

    2003-08-01

    Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To meet this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.
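
    The closed-loop structure described above can be reduced to a short sketch. The real simulation is MATLAB/Simulink with Irma called through an S-function; the Python stand-in below, with an invented rendering stub, plate scale, and loop gain, only illustrates how track errors computed from each rendered image drive the pointing used for the next frame.

      import numpy as np

      def render_scene(azimuth, elevation):
          """Stand-in for the Irma S-function call: returns an image whose bright
          target is offset from center by the current pointing error (radians)."""
          img = np.zeros((64, 64))
          row = int(32 + 2000 * elevation)   # hypothetical plate scale: 2000 px/rad
          col = int(32 + 2000 * azimuth)
          img[np.clip(row, 0, 63), np.clip(col, 0, 63)] = 1.0
          return img

      def centroid_error(img):
          """Track algorithm: intensity centroid relative to the image center."""
          rows, cols = np.indices(img.shape)
          total = img.sum()
          return ((rows * img).sum() / total - 32.0,
                  (cols * img).sum() / total - 32.0)

      az, el, gain = 0.005, -0.003, 0.4   # initial pointing errors (rad), loop gain
      for step in range(20):
          err_row, err_col = centroid_error(render_scene(az, el))
          el -= gain * err_row / 2000.0   # proportional optical control
          az -= gain * err_col / 2000.0
      print(f"residual pointing error: az={az:.2e} rad, el={el:.2e} rad")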

  15. NextGen Operational Improvements: Will they Improve Human Performance

    NASA Technical Reports Server (NTRS)

    Beard, Bettina L.; Johnston, James C.; Holbrook, Jon

    2013-01-01

    Modernization of the National Airspace System depends critically on the development of advanced technology, including cutting-edge automation, controller decision-support tools and integrated on-demand information. The Next Generation Air Transportation System national plan envisions air traffic control tower automation that proposes solutions for seven problems: 1) departure metering, 2) taxi routing, 3) taxi and runway scheduling, 4) departure runway assignments, 5) departure flow management, 6) integrated arrival and departure scheduling and 7) runway configuration management. Government, academia and industry are simultaneously pursuing the development of these tools. For each tool, the development process typically begins by assessing its potential benefits, and then progresses to designing preliminary versions of the tool, followed by testing the tool's strengths and weaknesses using computational modeling, human-in-the-loop simulation and/or field tests. We compiled the literature, evaluated the methodological rigor of the studies and served as referee for partisan conclusions that were sometimes overly optimistic. Here we provide the results of this review.

  16. Experimental Evaluation of Verification and Validation Tools on Martian Rover Software

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Giannakopoulou, Dimitra; Goldberg, Allen; Havelund, Klaus; Lowry, Mike; Pasareanu, Corina; Venet, Arnaud; Visser, Willem; Washington, Rich

    2003-01-01

    We report on a study to determine the maturity of different verification and validation (V&V) technologies on a representative example of NASA flight software. The study consisted of a controlled experiment where three technologies (static analysis, runtime analysis and model checking) were compared to traditional testing with respect to their ability to find seeded errors in a prototype Mars Rover. What makes this study unique is that it is the first (to the best of our knowledge) to do a controlled experiment to compare formal-methods-based tools to testing on a realistic, industrial-size example, where the emphasis was on collecting as much data on the performance of the tools and the participants as possible. The paper includes a description of the Rover code that was analyzed, the tools used, as well as a detailed description of the experimental setup and the results. Due to the complexity of setting up the experiment, our results cannot be generalized, but we believe they can still serve as a valuable point of reference for future studies of this kind. The study did confirm our belief that advanced tools can outperform testing when trying to locate concurrency errors. Furthermore, the results of the experiment inspired a novel framework for testing the next generation of the Rover.

  17. A taxonomy and discussion of software attack technologies

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.

    2005-03-01

    Software is a complex thing. It is not an engineering artifact that springs forth from a design by simply following software coding rules; creativity and the human element are at the heart of the process. Software development is part science, part art, and part craft. Design, architecture, and coding are equally important activities, and in each of these activities errors may be introduced that lead to security vulnerabilities. Therefore, inevitably, errors enter into the code. Some of these errors are discovered during testing; however, some are not. The best way to find security errors, whether they are introduced as part of the architecture development effort or the coding effort, is to automate the security testing process to the maximum extent possible and to add this class of tools to the tool set that aids in the compilation process, testing, test analysis, and software distribution. Recent technological advances, improvements in computer-generated forces (CGFs), and results from research in information assurance and software protection indicate that we can build a semi-intelligent software security testing tool. However, before we can undertake the security testing automation effort, we must understand the scope of the required testing, the security failures that need to be uncovered during testing, and the characteristics of those failures. Therefore, we undertook the research reported in this paper: the development of a taxonomy and a discussion of software attacks, generated from the point of view of the security tester, with the goal of using the taxonomy to guide the development of the knowledge base for the automated security testing tool. The representation for attacks and threat cases yielded by this research captures the strategies, tactics, and other considerations that come into play during the planning and execution of attacks upon application software. The paper is organized as follows. Section one contains an introduction to our research and a discussion of the motivation for our work. Section two presents our taxonomy of software attacks and a discussion of the strategies employed and the general weaknesses exploited by each attack. Section three contains a summary and suggestions for further research.

  18. Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems

    NASA Astrophysics Data System (ADS)

    Williams, John W.; Potter, Gary E.

    2002-11-01

    QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretability Rating Scale). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.
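
    The abstract does not state which GIQE version STAR implements; for reference, a minimal sketch of the widely published GIQE 4.0 regression (Leachtenauer et al., 1997) is shown below, with GSD in inches, RER the geometric-mean relative edge response, H the edge overshoot from MTF compensation, and G the noise gain of the MTFC kernel. The example inputs are invented.

      import math

      def giqe4_niirs(gsd_in, rer, overshoot_h, noise_gain_g, snr):
          """GIQE 4.0 estimate of NIIRS from sensor/image-chain parameters."""
          # Published coefficient switch at RER = 0.9.
          a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
          return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
                  - 0.656 * overshoot_h - 0.344 * noise_gain_g / snr)

      # Roughly 0.5 m GSD (19.7 in), RER 0.9, modest MTFC, SNR 50 -> about NIIRS 4.6.
      print(round(giqe4_niirs(19.7, 0.9, 1.9, 10.0, 50.0), 2))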

  19. Multiple comparison analysis testing in ANOVA.

    PubMed

    McHugh, Mary L

    2011-01-01

    The Analysis of Variance (ANOVA) test has long been an important tool for researchers conducting studies on multiple experimental groups and one or more control groups. However, ANOVA cannot provide detailed information on differences among the various study groups, or on complex combinations of study groups. To fully understand group differences in an ANOVA, researchers must conduct tests of the differences between particular pairs of experimental and control groups. Tests conducted on subsets of data tested previously in another analysis are called post hoc tests. The class of post hoc tests that provides this type of detailed information for ANOVA results is called "multiple comparison analysis" tests. The most commonly used multiple comparison analysis statistics include the following tests: Tukey, Newman-Keuls, Scheffé, Bonferroni, and Dunnett. These statistical tools each have specific uses, advantages and disadvantages. Some are best used for testing theory while others are useful in generating new theory. Selection of the appropriate post hoc test will provide researchers with the most detailed information while limiting Type 1 errors due to alpha inflation.
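
    As a minimal worked example with invented data, an omnibus ANOVA followed by a Tukey HSD post hoc test can be run with SciPy (scipy.stats.tukey_hsd requires SciPy 1.8 or later):

      from scipy import stats

      # Hypothetical scores for one control and two experimental groups.
      control = [82, 79, 88, 91, 85, 84]
      treat_a = [90, 94, 89, 97, 92, 95]
      treat_b = [84, 86, 83, 89, 88, 85]

      # Omnibus ANOVA: is at least one group mean different?
      f_stat, p_value = stats.f_oneway(control, treat_a, treat_b)
      print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

      # Post hoc multiple comparison analysis: which particular pairs differ?
      print(stats.tukey_hsd(control, treat_a, treat_b))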

  20. Recent Developments in OVERGRID, OVERFLOW-2 and Chimera Grid Tools Scripts

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    OVERGRID and OVERFLOW-2 feature easy-to-use multiple-body dynamics. The new features of OVERGRID include a preliminary chemistry interface, standard atmosphere and mass properties calculators, a simple unsteady solution viewer, and a debris tracking interface. Script library development in Chimera Grid Tools has applications in turbopump grid generation. This viewgraph presentation profiles multiple-component dynamics; validation test cases for a sphere, a cylinder, and an oscillating airfoil; and debris analysis.

  1. HullBUG Technology Development for Underwater Hull Cleaning

    DTIC Science & Technology

    2014-01-15

    Grooming Tool would be generated. These drawings would be vended out to approved vendors for quoting. Quotes would be obtained for different levels... determined, parts would be vended out and manufactured. These parts would then be assembled at SRC. A test program would then follow to fully... identified all machined parts as well as all purchased hardware Top Assembly Based on total cost, a decision was made to make 1 grooming tool and 1

  2. Automating Initial Guess Generation for High Fidelity Trajectory Optimization Tools

    NASA Technical Reports Server (NTRS)

    Villa, Benjamin; Lantoine, Gregory; Sims, Jon; Whiffen, Gregory

    2013-01-01

    Many academic studies in spaceflight dynamics rely on simplified dynamical models, such as restricted three-body models or averaged forms of the equations of motion of an orbiter. In practice, the end result of these preliminary orbit studies needs to be transformed into more realistic models, in particular to generate good initial guesses for high-fidelity trajectory optimization tools like Mystic. This paper reviews and extends some of the approaches used in the literature to perform such a task, and explores the inherent trade-offs of such a transformation with a view toward automating it for the case of ballistic arcs. Sample test cases in the libration point regimes and small body orbiter transfers are presented.

  3. Designing a Virtual Item Bank Based on the Techniques of Image Processing

    ERIC Educational Resources Information Center

    Liao, Wen-Wei; Ho, Rong-Guey

    2011-01-01

    One of the major weaknesses of figural items in Intelligence Quotient (IQ) tests lies in the inaccuracy of their item exposure rates. In this study, a new approach is proposed and a useful test tool known as the Virtual Item Bank (VIB) is introduced. The VIB combines Automatic Item Generation theory and image processing theory with the concepts of…

  4. Next generation fuel irradiation capability in the High Flux Reactor Petten

    NASA Astrophysics Data System (ADS)

    Fütterer, Michael A.; D'Agata, Elio; Laurie, Mathias; Marmier, Alain; Scaffidi-Argentina, Francesco; Raison, Philippe; Bakker, Klaas; de Groot, Sander; Klaassen, Frodo

    2009-07-01

    This paper describes selected equipment and expertise on fuel irradiation testing at the High Flux Reactor (HFR) in Petten, The Netherlands. The reactor went critical in 1961 and holds an operating license up to at least 2015. While the HFR initially focused on Light Water Reactor fuel and materials, it has also played a decisive role since the 1970s in the German High Temperature Reactor (HTR) development program. A variety of tests related to fast reactor development in Europe were carried out for next generation fuel and materials, in particular for Very High Temperature Reactor (V/HTR) fuel, fuel for closed fuel cycles (U-Pu and Th-U fuel cycles) and transmutation, as well as for other innovative fuel types. The HFR constitutes a significant European infrastructure tool for the development of next generation reactors. Experimental facilities addressed include V/HTR fuel tests, a coated particle irradiation rig, and tests on fast reactor, transmutation and thorium fuel. The rationales for these tests are given, results are provided and further work is outlined.

  5. Diagnostic tools in ocular allergy.

    PubMed

    Leonardi, A; Doan, S; Fauquert, J L; Bozkurt, B; Allegri, P; Marmouz, F; Rondon, C; Jedrzejczak, M; Hellings, P; Delgado, L; Calder, V

    2017-10-01

    Ocular allergy (OA) includes a group of common and less frequent hypersensitivity disorders frequently misdiagnosed and not properly managed. The diagnosis of OA is usually based on clinical history and signs and symptoms, with the support of in vivo and in vitro tests when identification of the specific allergen is required. To date, no specific test is available for the diagnosis of the whole spectrum of the different forms of OA. The lack of recommendations on diagnosis of OA is considered a medical need not only for allergists but also for ophthalmologists. This position paper aims to provide a comprehensive overview of the currently available tools for diagnosing OA to promote a common nomenclature and procedures to be used by different specialists. Questionnaires, sign and symptom grading scales, tests, and potential biomarkers for OA are reviewed. We also identified several unmet needs in the diagnostic tools to generate interest, increase understanding, and inspire further investigations. Tools, recommendations, and algorithms for the diagnosis of OA are proposed for use by both allergists and ophthalmologists. Several unmet needs in the diagnostic tools should be further improved by specific clinical research in OA. © 2017 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.

  6. Arc Jet Facility Test Condition Predictions Using the ADSI Code

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda

    2015-01-01

    The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamics solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated with a Navier-Stokes flow solver using previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting the mass flow and current needed to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data is used to assess the predictive capability of the ADSI code.
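
    The core database interpolation can be illustrated with SciPy's regular-grid interpolator; the grid values below are invented placeholders, not actual arc jet data:

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Hypothetical database axes: mass flow rate (kg/s) and arc current (A),
      # with stagnation-point heating rate (W/cm^2) from precomputed CFD solutions.
      mass_flow = np.array([0.1, 0.2, 0.3, 0.4])
      current = np.array([2000.0, 4000.0, 6000.0])
      heating = np.array([[150., 260., 380.],
                          [190., 320., 450.],
                          [220., 370., 520.],
                          [245., 410., 580.]])

      interp = RegularGridInterpolator((mass_flow, current), heating)
      print(interp([[0.25, 5000.0]]))   # heating rate at an off-grid condition

    Going the other direction (finding the mass flow and current that achieve a desired stagnation condition) amounts to numerically inverting this interpolant, e.g. with a root finder over the two inputs.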

  7. Test-bench system for a borehole azimuthal acoustic reflection imaging logging tool

    NASA Astrophysics Data System (ADS)

    Liu, Xianping; Ju, Xiaodong; Qiao, Wenxiao; Lu, Junqiang; Men, Baiyong; Liu, Dong

    2016-06-01

    The borehole azimuthal acoustic reflection imaging logging tool (BAAR) is a new generation of imaging logging tool that can investigate strata over a relatively large region of space around the borehole. The BAAR is designed on the principle of modularization and has a very complex structure, so a dedicated test-bench system for debugging each module of the BAAR became an urgent need. With the help of the test-bench system introduced in this paper, testing and calibration of the BAAR can be easily achieved. The test-bench system is designed on the client/server model. The hardware system mainly consists of a host computer, an embedded controlling board, a bus interface board, a data acquisition board, and a telemetry communication board. The host computer serves as the human-machine interface and processes the uploaded data. The software running on the host computer is written in VC++. The embedded controlling board uses an ARM7 (Advanced RISC Machines 7) processor as the microcontroller and communicates with the host computer via Ethernet. The software for the embedded controlling board is developed on the uClinux operating system. The bus interface board, data acquisition board, and telemetry communication board are designed around a field programmable gate array (FPGA) and provide test interfaces for the logging tool. To examine the feasibility of the test-bench system, it was set up to perform a test on the BAAR. By analyzing the test results, an unqualified channel in the electronic receiving cabin was discovered. The test-bench system can thus quickly determine the working condition of the BAAR's sub-modules, which is of great significance in improving production efficiency and accelerating industrial production of the logging tool.

  8. Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Fulton, Christopher E.

    2011-01-01

    This software user manual describes the implementation and use of the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial-off-the-shelf (COTS) testability analysis software package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool utilizes system design information captured within the diagnostic model and testability analysis output from the TEAMS Designer software to create a series of six reports for various system engineering needs. The ETA Tool allows the user to perform additional studies on the testability analysis results by determining the detection sensitivity to the loss of certain sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle. The diagnostic analysis provided by the ETA Tool was proven to be valuable system engineering output that provided consistency in the verification of system engineering requirements. This software user manual provides a description of each output report generated by the ETA Tool. The manual also describes the example diagnostic model and supporting documentation - also provided with the ETA Tool software release package - that were used to generate the reports presented in the manual.

  9. Man-rated flight software for the F-8 DFBW program

    NASA Technical Reports Server (NTRS)

    Bairnsfather, R. R.

    1976-01-01

    The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program assembly control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools are described, as well as the program test plans and their implementation on the various simulators. Failure effects analysis and the creation of special failure generating software for testing purposes are described.

  10. CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies

    NASA Astrophysics Data System (ADS)

    Moorhead, Ian R.; Gilmore, Marilyn A.; Houlbrook, Alexander W.; Oxford, David E.; Filbee, David R.; Stroud, Colin A.; Hutchings, G.; Kirk, Albert

    2001-09-01

    Assessment of camouflage, concealment, and deception (CCD) methodologies is not a trivial problem; conventionally the only method has been to carry out field trials, which are both expensive and subject to the vagaries of the weather. In recent years computing power has increased such that there are now many research programs using synthetic environments for CCD assessments. Such an approach is attractive; the user has complete control over the environmental parameters and many more scenarios can be investigated. The UK Ministry of Defence is currently developing a synthetic scene generation tool for assessing the effectiveness of air vehicle camouflage schemes. The software is sufficiently flexible to allow it to be used in a broader range of applications, including full CCD assessment. The synthetic scene simulation system (CAMEO-SIM) has been developed, as an extensible system, to provide imagery within the 0.4 to 14 micrometer spectral band with as high a physical fidelity as possible. It consists of a scene design tool, an image generator that incorporates both radiosity and ray-tracing processes, and an experimental trials tool. The scene design tool allows the user to develop a 3D representation of the scenario of interest from a fixed viewpoint. Target(s) of interest can be placed anywhere within this 3D representation and may be either static or moving. Different illumination conditions and effects of the atmosphere can be modeled together with directional reflectance effects. The user has complete control over the level of fidelity of the final image. The output from the rendering tool is a sequence of radiance maps, which may be used by sensor models or for experimental trials in which observers carry out target acquisition tasks. The software also maintains an audit trail of all data selected to generate a particular image, both in terms of the material properties used and the rendering options chosen. A range of verification tests has shown that the software computes the correct values for analytically tractable scenarios. Validation tests using simple scenes have also been undertaken. More complex validation tests using observer trials are planned. The current version of CAMEO-SIM and how its images are used for camouflage assessment are described. The verification and validation tests undertaken are discussed. In addition, example images will be used to demonstrate the significance of different effects, such as spectral rendering and shadows. Planned developments of CAMEO-SIM are also outlined.

  11. User Manual for the PROTEUS Mesh Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Micheal A.; Shemon, Emily R.

    2015-06-01

    This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.

  12. Requirements Document for Development of a Livermore Tomography Tools Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seetho, I. M.

    In this document, we outline an exercise performed at LLNL to evaluate the user interface deficits of an LLNL-developed CT reconstruction software package, Livermore Tomography Tools (LTT). We observe that a difficult-to-use command line interface and the lack of support functions compound to generate a bottleneck in the CT reconstruction process when input parameters to key functions are not well known. Through the exercise of systems engineering best practices, we generate key performance parameters for an LTT interface refresh, and specify a combination of back-end (“test-mode” functions) and front-end (graphical user interface visualization and command scripting tools) solutions to LTT’s poor user interface that aim to mitigate issues and lower costs associated with CT reconstruction using LTT. Key functional and non-functional requirements and risk mitigation strategies for the solution are outlined and discussed.

  13. Development of a comprehensive software engineering environment

    NASA Technical Reports Server (NTRS)

    Hartrum, Thomas C.; Lamont, Gary B.

    1987-01-01

    The generation of a set of tools for the software lifecycle is a recurring theme in the software engineering literature. The development of such tools and their integration into a software development environment is a difficult task because of the magnitude (number of variables) and the complexity (combinatorics) of the software lifecycle process. Development of a global approach began in 1982 as the Software Development Workbench (SDW). Continuing efforts focus on tool development, tool integration, human interfacing, data dictionaries, and testing algorithms. Current efforts emphasize natural language interfaces, expert system software development associates, and distributed environments with Ada as the target language. The current implementation of the SDW is on a VAX-11/780. Other software development tools are being networked through engineering workstations.

  14. Is there a “net generation” in veterinary medicine? A comparative study on the use of the Internet and Web 2.0 by students and the veterinary profession

    PubMed Central

    Tenhaven, Christoph; Tipold, Andrea; Fischer, Martin R.; Ehlers, Jan P.

    2013-01-01

    Introduction: Informal and formal lifelong learning is essential at university and in the workplace. Apart from classical learning techniques, Web 2.0 tools can be used. It is controversial whether there is a so-called net generation amongst people under 30. Aims: To test the hypothesis that a net generation among students and young veterinarians exists. Methods: An online survey of students and veterinarians was conducted in the German-speaking countries which was advertised via online media and traditional print media. Results: 1780 people took part in the survey. Students and veterinarians have different usage patterns regarding social networks (91.9% vs. 69%) and IM (55.9% vs. 24.5%). All tools were predominantly used passively and in private, to a lesser extent also professionally and for studying. Outlook: The use of Web 2.0 tools is useful, however, teaching information and media skills, preparing codes of conduct for the internet and verification of user generated content is essential. PMID:23467682

  15. A strategic informatics approach to autoverification.

    PubMed

    Jones, Jay B

    2013-03-01

    Autoverification is rapidly expanding with increased functionality provided by middleware tools. It is imperative that autoverification of laboratory test results be viewed as a process evolving into a broader, more sophisticated form of decision support, which will require strategic planning to form a foundational tool set for the laboratory. One must strategically plan to expand autoverification in the future to include a vision of instrument-generated order interfaces, reflexive testing, and interoperability with other information systems. It is hoped that the observations, examples, and opinions expressed in this article will stimulate such short-term and long-term strategic planning. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Testing large flats with computer generated holograms

    NASA Astrophysics Data System (ADS)

    Pariani, Giorgio; Tresoldi, Daniela; Spanò, Paolo; Bianco, Andrea

    2012-09-01

    We describe the optical test of a large flat based on a spherical mirror and a dedicated CGH. The spherical mirror, which can be accurately manufactured and tested in an absolute way, allows one to obtain a quasi-collimated light beam, and the hologram performs the residual wavefront correction. Alignment tools for the spherical mirror and the hologram itself are encoded in the CGH. Sensitivity to fabrication errors and alignment has been evaluated. Tests to verify the effectiveness of our approach are now in progress.

  17. EGG: Empirical Galaxy Generator

    NASA Astrophysics Data System (ADS)

    Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.

    2018-04-01

    The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).

  18. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape, as in the sketch below. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes; hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves, but are also capable of simulating weather patterns.
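
    A minimal sketch of the diamond-square algorithm mentioned above: seeded corner values control the large-scale shape, and the roughness parameter controls how quickly the random perturbation amplitude decays at finer scales.

      import numpy as np

      def diamond_square(n, roughness=1.0, seed=0):
          """Generate a (2**n + 1)-square heightmap with diamond-square."""
          rng = np.random.default_rng(seed)
          size = 2**n + 1
          h = np.zeros((size, size))
          h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seeds
          step, amp = size - 1, 1.0
          while step > 1:
              half = step // 2
              # Diamond step: center of each square = mean of its 4 corners + noise.
              for r in range(half, size, step):
                  for c in range(half, size, step):
                      avg = (h[r-half, c-half] + h[r-half, c+half] +
                             h[r+half, c-half] + h[r+half, c+half]) / 4.0
                      h[r, c] = avg + rng.uniform(-amp, amp)
              # Square step: edge midpoints = mean of axial neighbors + noise.
              for r in range(0, size, half):
                  for c in range((r + half) % step, size, step):
                      nbrs = [h[rr, cc] for rr, cc in
                              ((r-half, c), (r+half, c), (r, c-half), (r, c+half))
                              if 0 <= rr < size and 0 <= cc < size]
                      h[r, c] = sum(nbrs) / len(nbrs) + rng.uniform(-amp, amp)
              step, amp = half, amp * 2.0**(-roughness)
          return h

      terrain = diamond_square(7)   # 129 x 129 heightmap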

  19. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Andrews, Stephen F.; Morgenstern, Wendy M.; Bartholomew, Maureen O.; McComas, David C.; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, attitude control, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on previous missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the perceived benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.
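
    The automated comparison-plot step can be sketched as below. The file names and telemetry layout are hypothetical, and the MAP tools ran inside an integrated scripting environment rather than standalone Python; the idea is simply to resample the simulation onto the flight-software timestamps and plot both channels plus their difference.

      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical layout: column 0 = time (s), column 1 = a body-rate channel.
      fsw = np.loadtxt("fsw_test_telemetry.csv", delimiter=",", skiprows=1)
      hifi = np.loadtxt("hifi_sim_output.csv", delimiter=",", skiprows=1)

      # Synchronize: resample the HiFi run onto the flight-software timestamps.
      hifi_resampled = np.interp(fsw[:, 0], hifi[:, 0], hifi[:, 1])

      fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
      ax1.plot(fsw[:, 0], fsw[:, 1], label="flight software")
      ax1.plot(fsw[:, 0], hifi_resampled, "--", label="HiFi simulation")
      ax1.set_ylabel("body rate (deg/s)")
      ax1.legend()
      ax2.plot(fsw[:, 0], fsw[:, 1] - hifi_resampled)
      ax2.set_ylabel("difference")
      ax2.set_xlabel("time (s)")
      fig.savefig("comparison_body_rate.png")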

  20. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Morgenstern, Wendy M.; Bartholomew, Maureen O.

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, knowledge of attitude control and attitude control hardware, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on other missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.

  1. Metastatic melanoma moves on: translational science in the era of personalized medicine.

    PubMed

    Levesque, Mitchell P; Cheng, Phil F; Raaijmakers, Marieke I G; Saltari, Annalisa; Dummer, Reinhard

    2017-03-01

    Progress in understanding and treating metastatic melanoma is the result of decades of basic and translational research as well as the development of better in vitro tools for modeling the disease. Here, we review the latest therapeutic options for metastatic melanoma and the known genetic and non-genetic mechanisms of resistance to these therapies, as well as the in vitro toolbox that has provided the greatest insights into melanoma progression. These include next-generation sequencing technologies and more complex 2D and 3D cell culture models to functionally test the data generated by genomics approaches. The combination of hypothesis generating and hypothesis testing paradigms reviewed here will be the foundation for the next phase of metastatic melanoma therapies in the coming years.

  2. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
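
    The operator-overloading mechanism evaluated above can be illustrated with a toy example. The sketch below is forward-mode for brevity (the adjoint codes in the study are reverse-mode), but the instrumentation idea is the same: every arithmetic operation on an overloaded type carries derivative information along with the value.

      class Dual:
          """Minimal forward-mode AD by operator overloading."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.dot + o.dot)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.dot * o.val + self.val * o.dot)  # product rule
          __rmul__ = __mul__

      def model(p):
          # Toy "model": y = 3*p**2 + 2*p, so the sensitivity dy/dp = 6*p + 2.
          return 3 * p * p + 2 * p

      x = Dual(4.0, 1.0)    # seed dp/dp = 1
      y = model(x)
      print(y.val, y.dot)   # 56.0 26.0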

  3. NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop the Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure to increase the quality and accuracy of energy analysis tools for the building retrofit market. The Building Energy Simulation Test for Existing Homes (BESTEST-EX) is a test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. The diagram illustrates the utility bill calibration test cases. Participants are given input ranges and synthetic utility bills. Software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participant energy savings predictions using calibrated models are compared to NREL predictions using state-of-the-art building energy simulation programs.

  4. Comparison of seven fall risk assessment tools in community-dwelling Korean older women.

    PubMed

    Kim, Taekyoung; Xiong, Shuping

    2017-03-01

    This study aimed to compare seven widely used fall risk assessment tools in terms of validity and practicality, and to provide a guideline for choosing appropriate fall risk assessment tools for elderly Koreans. Sixty community-dwelling Korean older women (30 fallers and 30 matched non-fallers) were evaluated. Performance measures of all tools were compared between the faller and non-faller groups through two sample t-tests. Receiver Operating Characteristic curves were generated with odds ratios for discriminant analysis. Results showed that four tools had significant discriminative power, and the shortened version of the Falls Efficacy Scale (SFES) showed excellent discriminant validity, followed by the Berg Balance Scale (BBS) with acceptable discriminant validity. The Mini Balance Evaluation System Test and Timed Up and Go, however, had limited discriminant validity. In terms of practicality, the SFES was also excellent. These findings suggest that the SFES is the most suitable tool for assessing the fall risks of community-dwelling Korean older women, followed by the BBS. Practitioner Summary: There is no general guideline on which fall risk assessment tools are suitable for community-dwelling Korean older women. This study compared seven widely used assessment tools in terms of validity and practicality. Results suggested that the short Falls Efficacy Scale is the most suitable tool, followed by the Berg Balance Scale.

  5. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in their use in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing of the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  6. SU-F-BRB-16: A Spreadsheet Based Automatic Trajectory GEnerator (SAGE): An Open Source Tool for Automatic Creation of TrueBeam Developer Mode Robotic Trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etmektzoglou, A; Mishra, P; Svatos, M

    Purpose: To automate creation and delivery of robotic linac trajectories with TrueBeam Developer Mode, an open source spreadsheet-based trajectory generation tool has been developed, tested and made freely available. The computing power inherent in a spreadsheet environment plus additional functions programmed into the tool insulate users from the underlying schema tedium and allow easy calculation, parameterization, graphical visualization, validation and finally automatic generation of Developer Mode XML scripts which are directly loadable on a TrueBeam linac. Methods: The robotic control system platform that allows total coordination of potentially all linac moving axes with beam (continuous, step-and-shoot, or combination thereof) becomes available in TrueBeam Developer Mode. Many complex trajectories are either geometric or can be described in analytical form, making the computational power, graphing and programmability available in a spreadsheet environment an easy and ideal vehicle for automatic trajectory generation. The spreadsheet environment allows also for parameterization of trajectories thus enabling the creation of entire families of trajectories using only a few variables. Standard spreadsheet functionality has been extended for powerful movie-like dynamic graphic visualization of the gantry, table, MLC, room, lasers, 3D observer placement and beam centerline all as a function of MU or time, for analysis of the motions before requiring actual linac time. Results: We used the tool to generate and deliver extended SAD “virtual isocenter” trajectories of various shapes such as parameterized circles and ellipses. We also demonstrated use of the tool in generating linac couch motions that simulate respiratory motion using analytical parameterized functions. Conclusion: The SAGE tool is a valuable resource to experiment with families of complex geometric trajectories for a TrueBeam Linac. It makes Developer Mode more accessible as a vehicle to quickly translate research ideas into machine readable scripts without programming knowledge. As an open source initiative, it also enables researcher collaboration on future developments. I am a full time employee at Varian Medical Systems, Palo Alto, California.
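
    The parameterized-trajectory idea can also be sketched outside a spreadsheet. In the sketch below the control-point fields and element names are simplified placeholders, not the actual TrueBeam Developer Mode schema:

      import math
      import xml.etree.ElementTree as ET

      def circle_control_points(n_points=36, radius_deg=30.0, mu_total=100.0):
          """One parameterized family: a circular sweep of two rotary axes."""
          for i in range(n_points):
              t = i / (n_points - 1)
              yield {"Mu": mu_total * t,
                     "GantryRtn": 180.0 + radius_deg * math.cos(2 * math.pi * t),
                     "CollRtn": radius_deg * math.sin(2 * math.pi * t)}

      root = ET.Element("Beam")
      for cp in circle_control_points():
          pt = ET.SubElement(root, "ControlPoint")
          for axis, value in cp.items():
              ET.SubElement(pt, axis).text = f"{value:.3f}"
      ET.ElementTree(root).write("circle_trajectory.xml")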

  7. PyHLA: tests for the association between HLA alleles and diseases.

    PubMed

    Fan, Yanhui; Song, You-Qiang

    2017-02-06

    Recently, several tools have been designed for human leukocyte antigen (HLA) typing using single nucleotide polymorphism (SNP) array and next-generation sequencing (NGS) data. These tools provide high-throughput and cost-effective approaches for identifying HLA types. Therefore, tools for downstream association analysis are highly desirable. Although several tools have been designed for multi-allelic marker association analysis, they were designed only for microsatellite markers and do not scale well with increasing data volumes, or they were designed for large-scale data but provided a limited number of tests. We have developed a Python package called PyHLA, which implements several methods for HLA association analysis, to fill the gap. PyHLA is a tailor-made, easy-to-use, and flexible tool designed specifically for the association analysis of HLA types imputed from genome-wide genotyping and NGS data. PyHLA provides functions for association analysis, zygosity tests, and interaction tests between HLA alleles and diseases. Monte Carlo permutation and several methods for multiple testing correction have also been implemented. PyHLA provides a convenient and powerful tool for HLA analysis. Existing methods have been integrated and desired methods have been added in PyHLA. Furthermore, PyHLA is applicable to small and large sample sizes and can finish the analysis in a timely manner on a personal computer across different platforms. PyHLA is implemented in Python. PyHLA is a free, open source software distributed under the GPLv2 license. The source code, tutorial, and examples are available at https://github.com/felixfan/PyHLA.

  8. CEQer: A Graphical Tool for Copy Number and Allelic Imbalance Detection from Whole-Exome Sequencing Data

    PubMed Central

    Piazza, Rocco; Magistroni, Vera; Pirola, Alessandra; Redaelli, Sara; Spinelli, Roberta; Redaelli, Serena; Galbiati, Marta; Valletta, Simona; Giudici, Giovanni; Cazzaniga, Giovanni; Gambacorti-Passerini, Carlo

    2013-01-01

    Copy number alterations (CNA) are common events occurring in leukaemias and solid tumors. Comparative Genomic Hybridization (CGH) is currently the gold-standard technique for analyzing CNAs; however, CGH analysis requires dedicated instruments and is able to perform only low-resolution Loss of Heterozygosity (LOH) analyses. Here we present CEQer (Comparative Exome Quantification analyzer), a new graphical, event-driven tool for coupled CNA/allelic-imbalance (AI) analysis of exome sequencing data. By using case-control matched exome data, CEQer performs a comparative digital exonic quantification to generate CNA data and couples this information with exome-wide LOH and allelic imbalance detection. These data are used to build mixed statistical/heuristic models allowing the identification of CNA/AI events. To test our tool, we initially used in silico generated data; then we performed whole-exome sequencing on 20 leukemic specimens and corresponding matched controls and analyzed the results using CEQer. Taken globally, these analyses showed that the combined use of comparative digital exon quantification and LOH/AI detection generates very accurate CNA data. Therefore, we propose CEQer as an efficient, robust and user-friendly graphical tool for the identification of CNA/AI in the context of whole-exome sequencing data. PMID:24124457
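
    The comparative digital exonic quantification step can be sketched as follows. This is a simplified variant that assumes the two libraries were sequenced to comparable depth (real tools estimate a normalization factor) and omits CEQer's statistical/heuristic event calling; the counts are invented.

      import math

      def copy_number_log_ratios(case_counts, control_counts, pseudo=0.5):
          """Per-exon log2 ratio of case to control read counts; sustained
          deviation from 0 across consecutive exons suggests a CNA."""
          return [math.log2((a + pseudo) / (b + pseudo))
                  for a, b in zip(case_counts, control_counts)]

      # Six consecutive exons; the last three are consistent with a single-copy
      # deletion in the case sample (ratios near log2(0.5) = -1).
      case = [210, 198, 220, 101, 96, 104]
      ctrl = [205, 190, 230, 200, 195, 210]
      print([round(r, 2) for r in copy_number_log_ratios(case, ctrl)])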

  9. Investigation of Bearing Fatigue Damage Life Prediction Using Oil Debris Monitoring

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Bolander, Nathan; Haynes, Chris; Toms, Allison M.

    2011-01-01

    Research was performed to determine if a diagnostic tool for detecting fatigue damage of helicopter tapered roller bearings can be used to determine remaining useful life (RUL). The tapered roller bearings under study were installed on the tail gearbox (TGB) output shaft of UH-60M helicopters, removed from the helicopters, and subsequently installed in a bearing spall propagation test rig. The diagnostic tool was developed and evaluated experimentally by collecting oil debris data during spall progression tests on four bearings. During each test, data from an on-line, in-line, inductance-type oil debris sensor was monitored and recorded for the occurrence of pitting damage. Results from the four bearings tested indicate that measuring the debris generated when a bearing outer race begins to spall can be used to indicate bearing damage progression and remaining bearing life.

  10. TH-C-12A-12: Veritas: An Open Source Tool to Facilitate User Interaction with TrueBeam Developer Mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Varian Medical Systems, Palo Alto, CA; Lewis, J

    2014-06-15

    Purpose: To address the challenges of creating delivery trajectories and imaging sequences with TrueBeam Developer Mode, a new open-source graphical XML builder, Veritas, has been developed, tested and made freely available. Veritas eliminates most of the need to understand the underlying schema and write XML scripts by providing a graphical menu for each control point specifying the state of 30 mechanical/dose axes. All capabilities of Developer Mode are accessible in Veritas. Methods: Veritas was designed using Qt Designer, a 'what-you-see-is-what-you-get' (WYSIWYG) tool for building graphical user interfaces (GUI). Different components of the GUI are integrated using Qt's signals and slots mechanism. Functionalities are added using PySide, an open source, cross-platform Python binding for the Qt framework. The XML code generated is immediately visible, making it an interactive learning tool. A user starts from an anonymized DICOM file or XML example and introduces delivery modifications, or begins their experiment from scratch, then uses the GUI to modify control points as desired. The software automatically generates XML plans following the appropriate schema. Results: Veritas was tested by generating and delivering two XML plans at Brigham and Women's Hospital. The first example was created to irradiate the letter 'B' with a narrow MV beam using dynamic couch movements. The second was created to acquire 4D CBCT projections for four minutes. The delivery of the letter 'B' was observed using a 2D array of ionization chambers. Both deliveries were generated quickly in Veritas by non-expert Developer Mode users. Conclusion: We introduced a new open source tool, Veritas, for generating XML plans (delivery trajectories and imaging sequences). Veritas makes Developer Mode more accessible by reducing the learning curve for quick translation of research ideas into XML plans. Veritas is an open source initiative, creating the possibility for future developments and collaboration with other researchers. I am an employee of Varian Medical Systems.

  11. Trajectory Assessment and Modification Tools for Next Generation Air Traffic Management Operations

    NASA Technical Reports Server (NTRS)

    Brasil, Connie; Lee, Paul; Mainini, Matthew; Lee, Homola; Lee, Hwasoo; Prevot, Thomas; Smith, Nancy

    2011-01-01

    This paper reviews three Next Generation Air Transportation System (NextGen) based high fidelity air traffic control human-in-the-loop (HITL) simulations, with a focus on the expected requirement of enhanced automated trajectory assessment and modification tools to support future air traffic flow management (ATFM) planning positions. The simulations were conducted at the National Aeronautics and Space Administration (NASA) Ames Research Center's Airspace Operations Laboratory (AOL) in 2009 and 2010. The test airspace for all three simulations assumed the mid-term NextGen En-Route high-altitude environment, utilizing high-altitude sectors from the Kansas City and Memphis Air Route Traffic Control Centers. Trajectory assessment, modification, and coordination decision support tools were developed at the AOL in order to perform future ATFM tasks. Overall tool usage results and user acceptability ratings were collected across three areas of NextGen operations to evaluate the tools. In addition to the usefulness and usability feedback, feasibility issues, benefits, and future requirements were also addressed. Overall, the tool sets were rated very useful and usable, and many elements of the tools received high scores and were used frequently and successfully. Tool utilization results in all three HITLs showed both user and system benefits, including better airspace throughput, reduced controller workload, and highly effective communication protocols in both full Data Comm and mixed-equipage environments.

  12. Reducing Missed Laboratory Results: Defining Temporal Responsibility, Generating User Interfaces for Test Process Tracking, and Retrospective Analyses to Identify Problems

    PubMed Central

    Tarkan, Sureyya; Plaisant, Catherine; Shneiderman, Ben; Hettinger, A. Zachary

    2011-01-01

    Researchers have conducted numerous case studies reporting the details on how laboratory test results of patients were missed by the ordering medical providers. Given the importance of timely test results in an outpatient setting, there is limited discussion of electronic versions of test result management tools to help clinicians and medical staff with this complex process. This paper presents three ideas to reduce missed results with a system that facilitates tracking laboratory tests from order to completion as well as during follow-up: (1) define a workflow management model that clarifies responsible agents and associated time frame, (2) generate a user interface for tracking that could eventually be integrated into current electronic health record (EHR) systems, (3) help identify common problems in past orders through retrospective analyses. PMID:22195201

  13. Discriminative motif optimization based on perceptron training

    PubMed Central

    Patel, Ronak Y.; Stormo, Gary D.

    2014-01-01

    Motivation: Generating accurate transcription factor (TF) binding site motifs from data generated using next-generation sequencing, especially ChIP-seq, is challenging. The challenge arises because a typical experiment reports a large number of sequences bound by a TF, and the length of each sequence is relatively long. Most traditional motif finders are slow in handling such enormous amounts of data. To overcome this limitation, tools have been developed that trade accuracy for speed by using heuristic discrete search strategies or limited optimization of identified seed motifs. However, such strategies may not fully use the information in input sequences to generate motifs. Such motifs often form good seeds and can be further improved with appropriate scoring functions and rapid optimization. Results: We report a tool named discriminative motif optimizer (DiMO). DiMO takes a seed motif along with a positive and a negative database and improves the motif based on a discriminative strategy. We use the area under the receiver-operating characteristic curve (AUC) as a measure of the discriminating power of motifs and a strategy based on perceptron training that maximizes AUC rapidly in a discriminative manner. Using DiMO, on a large test set of 87 TFs from human, drosophila and yeast, we show that it is possible to significantly improve motifs identified by nine motif finders. The motifs are generated/optimized using training sets and evaluated on test sets. The AUC is improved for almost 90% of the TFs on test sets and the magnitude of increase is up to 39%. Availability and implementation: DiMO is available at http://stormo.wustl.edu/DiMO Contact: rpatel@genetics.wustl.edu, ronakypatel@gmail.com PMID:24369152
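
    A toy sketch of the underlying idea (not DiMO's actual code): treat the motif as a position weight matrix and apply ranking-perceptron updates that push best-window scores of positive sequences above those of negatives, which is what raises the AUC.

        import numpy as np

        BASES = "ACGT"

        def onehot(s):
            m = np.zeros((len(s), 4))
            for i, b in enumerate(s):
                m[i, BASES.index(b)] = 1.0
            return m

        def best_window(pwm, seq):
            w = pwm.shape[0]
            scores = [float((pwm * onehot(seq[i:i + w])).sum())
                      for i in range(len(seq) - w + 1)]
            i = int(np.argmax(scores))
            return scores[i], seq[i:i + w]

        def rank_update(pwm, positives, negatives, lr=0.1):
            # For each misranked (positive, negative) pair, shift weight mass
            # toward the best positive window and away from the best negative
            # window; fixing misranked pairs is what improves the AUC.
            for p in positives:
                for n in negatives:
                    sp, wp = best_window(pwm, p)
                    sn, wn = best_window(pwm, n)
                    if sp <= sn:
                        pwm += lr * (onehot(wp) - onehot(wn))
            return pwm

        pwm = rank_update(np.zeros((3, 4)), ["ACGTA", "TACGT"], ["GGGGG", "TTTTT"])
        print(np.round(pwm, 2))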

  14. Using bio.tools to generate and annotate workbench tool descriptions

    PubMed Central

    Hillion, Kenzo-Hugo; Kuzmin, Ivan; Khodak, Anton; Rasche, Eric; Crusoe, Michael; Peterson, Hedi; Ison, Jon; Ménager, Hervé

    2017-01-01

    Workbench and workflow systems such as Galaxy, Taverna, Chipster, or Common Workflow Language (CWL)-based frameworks, facilitate the access to bioinformatics tools in a user-friendly, scalable and reproducible way. Still, the integration of tools in such environments remains a cumbersome, time consuming and error-prone process. A major consequence is the incomplete or outdated description of tools that are often missing important information, including parameters and metadata such as publication or links to documentation. ToolDog (Tool DescriptiOn Generator) facilitates the integration of tools - which have been registered in the ELIXIR tools registry (https://bio.tools) - into workbench environments by generating tool description templates. ToolDog includes two modules. The first module analyses the source code of the bioinformatics software with language-specific plugins, and generates a skeleton for a Galaxy XML or CWL tool description. The second module is dedicated to the enrichment of the generated tool description, using metadata provided by bio.tools. This last module can also be used on its own to complete or correct existing tool descriptions with missing metadata. PMID:29333231
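
    As a rough illustration of the enrichment step (not ToolDog's actual code), the sketch below pulls a registered entry's metadata and drops it into a minimal Galaxy-style XML skeleton; the API path and JSON field names are assumptions about the public bio.tools registry.

        import json
        import urllib.request

        def fetch_biotools_entry(tool_id):
            # Assumed registry endpoint returning one tool's JSON record.
            url = f"https://bio.tools/api/tool/{tool_id}?format=json"
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)

        def galaxy_skeleton(entry):
            # Minimal Galaxy-style description filled from registry metadata.
            return (f'<tool id="{entry["biotoolsID"]}" name="{entry["name"]}">\n'
                    f'  <description>{entry.get("description", "")}</description>\n'
                    f'</tool>')

        # Example (network access required):
        # print(galaxy_skeleton(fetch_biotools_entry("signalp")))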

  15. Modular, Semantics-Based Composition of Biosimulation Models

    ERIC Educational Resources Information Center

    Neal, Maxwell Lewis

    2010-01-01

    Biosimulation models are valuable, versatile tools used for hypothesis generation and testing, codification of biological theory, education, and patient-specific modeling. Driven by recent advances in computational power and the accumulation of systems-level experimental data, modelers today are creating models with an unprecedented level of…

  16. Mobile Phones Democratize and Cultivate Next-Generation Imaging, Diagnostics and Measurement Tools

    PubMed Central

    Ozcan, Aydogan

    2014-01-01

    In this article, I discuss some of the emerging applications and the future opportunities and challenges created by the use of mobile phones and their embedded components for the development of next-generation imaging, sensing, diagnostics and measurement tools. The massive volume of mobile phone users, which has now reached ~7 billion, drives the rapid improvements of the hardware, software and high-end imaging and sensing technologies embedded in our phones, transforming the mobile phone into a cost-effective and yet extremely powerful platform to run e.g., biomedical tests and perform scientific measurements that would normally require advanced laboratory instruments. This rapidly evolving and continuing trend will help us transform how medicine, engineering and sciences are practiced and taught globally. PMID:24647550

  17. A field-emission based vacuum device for the generation of THz waves

    NASA Astrophysics Data System (ADS)

    Lin, Ming-Chieh

    2005-03-01

    Terahertz waves have been used to characterize the electronic, vibrational and compositional properties of solid, liquid and gas phase materials during the past decade. More and more applications in imaging science and technology call for well-developed THz wave sources. Amplification and generation of high-frequency electromagnetic waves are a common interest of field-emission-based devices. In the present work, we propose a vacuum electronic device based on the field emission mechanism for the generation of THz waves. To verify our designs, cold tests and hot tests have been studied via the simulation tools SUPERFISH and MAGIC. In the hot tests, two types of electron emission mechanisms are considered: one is field emission and the other is explosive emission. The preliminary design of the device is carried out and tested by numerical simulations. The simulation results show that an electronic efficiency of up to 4% can be achieved without employing any magnetic circuits.

  18. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Technical Reports Server (NTRS)

    Isaacson, Peter J.; DeLuccia, Frank J.; Reth, Alan D.; Igli, David A.; Carter, Delano R.

    2016-01-01

    Post-launch alignment errors for the Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) on GOES-R may be too large for the image navigation and registration (INR) processing algorithms to function without an initial adjustment to calibration parameters. We present an approach that leverages a combination of user-selected image-to-image tie points and image correlation algorithms to estimate this initial launch-induced offset and calculate adjustments to the Line of Sight Motion Compensation (LMC) parameters. We also present an approach to generate synthetic test images, to which shifts and rotations of known magnitude are applied. Results of applying the initial alignment tools to a subset of these synthetic test images are presented. The results for both ABI and GLM are within the specifications established for these tools, and indicate that application of these tools during the post-launch test (PLT) phase of GOES-R operations will enable the automated INR algorithms for both instruments to function as intended.
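
    One generic image correlation approach for estimating such an offset is phase correlation; the sketch below recovers a known translation between two synthetic images and is only an illustration of the technique, not the GOES-R tools' implementation.

        import numpy as np

        def phase_correlation_shift(ref, img):
            F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
            cross = np.conj(F1) * F2
            cross /= np.abs(cross) + 1e-12      # normalized cross-power spectrum
            corr = np.fft.ifft2(cross).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Map peak indices to signed shifts.
            if dy > ref.shape[0] // 2: dy -= ref.shape[0]
            if dx > ref.shape[1] // 2: dx -= ref.shape[1]
            return int(dy), int(dx)

        rng = np.random.default_rng(0)
        a = rng.random((64, 64))
        b = np.roll(a, (3, -5), axis=(0, 1))    # apply a known (3, -5) shift
        print(phase_correlation_shift(a, b))    # recovers (3, -5)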

  19. [Steps to transform a necessity into a validated and useful screening tool for early detection of developmental problems in Mexican children].

    PubMed

    Rizzoli-Córdoba, Antonio; Delgado-Ginebra, Ismael

    A screening test is an instrument whose primary function is to identify individuals with a probable disease among an apparently healthy population, establishing risk or suspicion of a disease. Caution must be taken when using a screening tool in order to avoid unrealistic measurements, delaying an intervention for those who may benefit from it. Before introducing a screening test into clinical practice, it is necessary to certify the presence of certain characteristics that make it useful. This "certification" process is called validation. The main objective of this paper is to describe the different steps that must be taken, from the identification of a need for early detection to the generation of a validated and reliable screening tool, using as an example the process for the modified version of the Child Development Evaluation Test (CDE or Prueba EDI) in Mexico. Copyright © 2015 Hospital Infantil de México Federico Gómez. Published by Masson Doyma México S.A. All rights reserved.

  20. Can we use virtual reality tools in the planning of an experiment?

    NASA Astrophysics Data System (ADS)

    Kucaba-Pietal, Anna; Szumski, Marek; Szczerba, Piotr

    2015-03-01

    Virtual reality (VR) has proved to be a particularly useful tool in engineering and design. A related area of aviation in which VR is particularly significant is flight training, as it requires many hours of practice and using real planes for all training is both expensive and dangerous. Research conducted at the Rzeszow University of Technology (RUT) showed that virtual reality can be successfully used for planning experiments during flight tests. The motivation for the study was the wing deformation measurement of a PW-6 glider in flight using the Image Pattern Correlation Technique (IPCT), planned within the frame of the AIM2 project. The tool VirlIPCT was constructed, which permits performing a virtual IPCT setup on an airplane. Using it, we can test camera position, camera resolution and pattern application. Moreover, tests performed at RUT indicate that VirlIPCT can be used as a virtual IPCT image generator. This paper presents results of the research on VirlIPCT.

  1. Preliminary Computational Study for Future Tests in the NASA Ames 9 foot x 7 foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Pearl, Jason M.; Carter, Melissa B.; Elmiligui, Alaa A.; Winski, Courtney S.; Nayani, Sudheer N.

    2016-01-01

    The NASA Advanced Air Vehicles Program, Commercial Supersonics Technology Project seeks to advance tools and techniques to make over-land supersonic flight feasible. In this study, preliminary computational results are presented for future tests in the NASA Ames 9 foot x 7 foot supersonic wind tunnel to be conducted in early 2016. Shock-plume interactions and their effect on pressure signature are examined for six model geometries. Near-field pressure signatures are assessed using the CFD code USM3D to model the proposed test geometries in free-air. Additionally, results obtained using the commercial grid generation software Pointwise (registered trademark) are compared to results using VGRID, the NASA Langley Research Center in-house mesh generation program.

  2. Small-Size High-Current Generators for X-Ray Backlighting

    NASA Astrophysics Data System (ADS)

    Chaikovsky, S. A.; Artyomov, A. P.; Zharova, N. V.; Zhigalin, A. S.; Lavrinovich, I. V.; Oreshkin, V. I.; Ratakhin, N. A.; Rousskikh, A. G.; Fedunin, A. V.; Fedushchak, V. F.; Erfort, A. A.

    2017-12-01

    The paper deals with soft X-ray backlighting based on the X-pinch as a powerful tool for physical studies of fast processes. We propose unique small-size pulsed power generators, each operating as a low-inductance capacitor bank. These pulse generators provide the X-pinch-based soft X-ray source (hν = 1-10 keV) of micron size at 2-3 ns pulse duration. The small size and weight of the pulse generators allow them to be transported to any laboratory for conducting X-ray backlighting of test objects with micron space resolution and nanosecond exposure time. These generators also allow creating synchronized multi-frame radiographic complexes with frame delay variation in a broad range.

  3. Accurate estimation of short read mapping quality for next-generation genome sequencing

    PubMed Central

    Ruffalo, Matthew; Koyutürk, Mehmet; Ray, Soumya; LaFramboise, Thomas

    2012-01-01

    Motivation: Several software tools specialize in the alignment of short next-generation sequencing reads to a reference sequence. Some of these tools report a mapping quality score for each alignment—in principle, this quality score tells researchers the likelihood that the alignment is correct. However, the reported mapping quality often correlates weakly with actual accuracy and the qualities of many mappings are underestimated, encouraging researchers to discard correct mappings. Further, these low-quality mappings tend to correlate with variations in the genome (both single nucleotide and structural), and such mappings are important in accurately identifying genomic variants. Approach: We develop a machine learning tool, LoQuM (LOgistic regression tool for calibrating the Quality of short read Mappings), to assign reliable mapping quality scores to mappings of Illumina reads returned by any alignment tool. LoQuM uses statistics on the read (base quality scores reported by the sequencer) and the alignment (number of matches, mismatches and deletions, mapping quality score returned by the alignment tool, if available, and number of mappings) as features for classification and uses simulated reads to learn a logistic regression model that relates these features to actual mapping quality. Results: We test the predictions of LoQuM on an independent dataset generated by the ART short read simulation software and observe that LoQuM can ‘resurrect’ many mappings that are assigned zero quality scores by the alignment tools and are therefore likely to be discarded by researchers. We also observe that the recalibration of mapping quality scores greatly enhances the precision of called single nucleotide polymorphisms. Availability: LoQuM is available as open source at http://compbio.case.edu/loqum/. Contact: matthew.ruffalo@case.edu. PMID:22962451
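
    A minimal sketch of the underlying recipe, with illustrative features and made-up training rows rather than LoQuM's actual feature set: fit a logistic regression on labeled (simulated) mappings, then convert the predicted error probability to a phred-scaled quality.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Columns: mean base quality, mismatches, reported MAPQ, number of mappings.
        X_train = np.array([[38, 0, 60, 1], [32, 3, 20, 4],
                            [36, 1, 50, 1], [28, 5,  0, 9]], dtype=float)
        y_train = np.array([1, 0, 1, 0])  # 1 = simulated read mapped correctly

        model = LogisticRegression().fit(X_train, y_train)

        def recalibrated_quality(features):
            p_wrong = 1.0 - model.predict_proba([features])[0, 1]
            return -10.0 * np.log10(max(p_wrong, 1e-10))  # phred scale

        print(round(recalibrated_quality([35, 1, 40, 2]), 1))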

  4. Real-time forecasts of tomorrow's earthquakes in California: a new mapping tool

    USGS Publications Warehouse

    Gerstenberger, Matt; Wiemer, Stefan; Jones, Lucy

    2004-01-01

    We have derived a multi-model approach to calculate time-dependent earthquake hazard resulting from earthquake clustering. This open-file report explains the theoretical background behind the approach, the specific details that are used in applying the method to California, as well as the statistical testing to validate the technique. We have implemented our algorithm as a real-time tool that has been automatically generating short-term hazard maps for California since May of 2002, at http://step.wr.usgs.gov

  5. Astronaut tool development: An orbital replaceable unit-portable handhold

    NASA Technical Reports Server (NTRS)

    Redmon, John W., Jr.

    1989-01-01

    A tool to be used during astronaut Extra-Vehicular Activity (EVA) replacement of spent or defective electrical/electronic component boxes is described. The generation of requirements and design philosophies are detailed, as well as specifics relating to mechanical development, interface verifications, testing, and astronaut feedback. Findings are presented in the form of: (1) a design which is universally applicable to spacecraft component replacement, and (2) guidelines that the designer of orbital replacement units might incorporate to enhance spacecraft on-orbit maintainability and EVA mission safety.

  6. Chimera Grid Tools

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  7. Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation.

    PubMed

    Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel

    2017-06-15

    Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.
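
    For readers unfamiliar with these techniques, the sketch below shows the standard two-parameter logistic (2PL) item response model and maximum-information item selection, the textbook building blocks of computerised adaptive testing; the item parameters are invented.

        import math

        def p_correct(theta, a, b):
            """2PL: probability of a correct response at ability theta."""
            return 1.0 / (1.0 + math.exp(-a * (theta - b)))

        def item_information(theta, a, b):
            p = p_correct(theta, a, b)
            return a * a * p * (1.0 - p)

        items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]  # (discrimination, difficulty)
        theta_hat = 0.3                                # current ability estimate
        # Adaptive testing picks the item most informative at the current estimate.
        best = max(items, key=lambda ab: item_information(theta_hat, *ab))
        print("next item (a, b):", best)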

  8. Development and Feasibility Testing of a Critical Care EEG Monitoring Database for Standardized Clinical Reporting and Multicenter Collaborative Research.

    PubMed

    Lee, Jong Woo; LaRoche, Suzette; Choi, Hyunmi; Rodriguez Ruiz, Andres A; Fertig, Evan; Politsky, Jeffrey M; Herman, Susan T; Loddenkemper, Tobias; Sansevere, Arnold J; Korb, Pearce J; Abend, Nicholas S; Goldstein, Joshua L; Sinha, Saurabh R; Dombrowski, Keith E; Ritzl, Eva K; Westover, Michael B; Gavvala, Jay R; Gerard, Elizabeth E; Schmitt, Sarah E; Szaflarski, Jerzy P; Ding, Kan; Haas, Kevin F; Buchsbaum, Richard; Hirsch, Lawrence J; Wusthoff, Courtney J; Hopp, Jennifer L; Hahn, Cecil D

    2016-04-01

    The rapid expansion of the use of continuous critical care electroencephalogram (cEEG) monitoring and resulting multicenter research studies through the Critical Care EEG Monitoring Research Consortium has created the need for a collaborative data sharing mechanism and repository. The authors describe the development of a research database incorporating the American Clinical Neurophysiology Society standardized terminology for critical care EEG monitoring. The database includes flexible report generation tools that allow for daily clinical use. Key clinical and research variables were incorporated into a Microsoft Access database. To assess its utility for multicenter research data collection, the authors performed a 21-center feasibility study in which each center entered data from 12 consecutive intensive care unit monitoring patients. To assess its utility as a clinical report generating tool, three large volume centers used it to generate daily clinical critical care EEG reports. A total of 280 subjects were enrolled in the multicenter feasibility study. The duration of recording (median, 25.5 hours) varied significantly between the centers. The incidence of seizure (17.6%), periodic/rhythmic discharges (35.7%), and interictal epileptiform discharges (11.8%) was similar to previous studies. The database was used as a clinical reporting tool by 3 centers that entered a total of 3,144 unique patients covering 6,665 recording days. The Critical Care EEG Monitoring Research Consortium database has been successfully developed and implemented with a dual role as a collaborative research platform and a clinical reporting tool. It is now available for public download to be used as a clinical data repository and report generating tool.

  9. Next Generation Sequencing: A useful tool for detection of sugarcane viruses in quarantine programs

    USDA-ARS?s Scientific Manuscript database

    The international exchange of sugarcane germplasm includes the risk of introducing potentially devastating pathogens that may threaten production. The USDA-APHIS Plant Germplasm Quarantine Program (PGQP) imports and tests sugarcane accessions that are used in research, variety development, and comme...

  10. Goethe's Faust Revisited: Lessons from DIT Research.

    ERIC Educational Resources Information Center

    Nucci, Larry

    2002-01-01

    Discusses the Defining Issues Test as an invaluable tool for research and practice in moral education. Explains that because such instruments are based upon previous developmental research, they are unsuitable for research on moral development. Argues that these measures stand in the way of generating new knowledge. (CAJ)

  11. Clay Caterpillars: A Tool for Ecology & Evolution Laboratories

    ERIC Educational Resources Information Center

    Barber, Nicholas A.

    2012-01-01

    I present a framework for ecology and evolution laboratory exercises using artificial caterpillars made from modeling clay. Students generate and test hypotheses about predation rates on caterpillars that differ in appearance or "behavior" to understand how natural selection by predators shapes distribution and physical characteristics of…

  12. BioCluster: tool for identification and clustering of Enterobacteriaceae based on biochemical data.

    PubMed

    Abdullah, Ahmed; Sabbir Alam, S M; Sultana, Munawar; Hossain, M Anwar

    2015-06-01

    Presumptive identification of different Enterobacteriaceae species is routinely achieved based on biochemical properties. Traditional practice includes manual comparison of each biochemical property of the unknown sample with known reference samples and inference of its identity based on the maximum similarity pattern with the known samples. This process is labor-intensive, time-consuming, error-prone, and subjective. Therefore, automation of sorting and similarity calculation would be advantageous. Here we present a MATLAB-based graphical user interface (GUI) tool named BioCluster. This tool was designed for automated clustering and identification of Enterobacteriaceae based on biochemical test results. In this tool, we used two types of algorithms, i.e., traditional hierarchical clustering (HC) and the Improved Hierarchical Clustering (IHC), a modified algorithm that was developed specifically for the clustering and identification of Enterobacteriaceae species. IHC takes into account the variability in result of 1-47 biochemical tests within this Enterobacteriaceae family. This tool also provides different options to optimize the clustering in a user-friendly way. Using computer-generated synthetic data and some real data, we have demonstrated that BioCluster has high accuracy in clustering and identifying enterobacterial species based on biochemical test data. This tool can be freely downloaded at http://microbialgen.du.ac.bd/biocluster/. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.
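
    The HC baseline the paper starts from is ordinary hierarchical clustering of binary test profiles; a minimal sketch (invented profiles, not the IHC variant's Enterobacteriaceae-specific handling) is:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        # Rows = isolates, columns = biochemical tests (1 = positive, 0 = negative).
        profiles = np.array([[1, 0, 1, 1, 0],
                             [1, 0, 1, 0, 0],
                             [0, 1, 0, 1, 1],
                             [0, 1, 0, 1, 0]])
        dist = pdist(profiles, metric="jaccard")  # dissimilarity between profiles
        tree = linkage(dist, method="average")    # UPGMA agglomeration
        print(fcluster(tree, t=2, criterion="maxclust"))  # assign two clusters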

  13. Fabrication of a micro-fluid gathering tool for the gastrointestinal juice sampling function of a versatile capsular endoscope.

    PubMed

    Koo, Kyo-In; Lee, Sangmin; Cho, Dong-il Dan

    2011-01-01

    This paper presents a micro-fluid gathering tool for a versatile capsular endoscope that employs a solid chemical propellant, azobisisobutyronitrile (AIBN). The proposed tool consists of a micro-heater, an AIBN matrix, a Venturi tube, a reservoir, an inlet, and an outlet. The micro-heater heats the AIBN matrix to be decomposed into by-products and nitrogen gas. This nitrogen gas generates negative pressure passing through the Venturi tube. The generated negative pressure inhales a target fluid from around the inlet into the reservoir. All the parts are designed to be embedded inside a cylindrical shape with a diameter of 17 mm and a height of 2.3 mm in order to integrate it into a versatile developmental capsular endoscope without any scaledown. Two sets of the proposed tools are fabricated and tested: one is made of polydimethylsiloxane (PDMS) and the other is made of polymethylmethacrylate (PMMA). In performance comparisons, the PDMS gathering tool can withstand a stronger pulling force, and the PMMA gathering tool requires a less negative pressure for inhaling the same target fluid. Due to the instant and full activation of the thin AIBN matrix, both types of gathering tool show analogous performance in the sample gathering evaluation. The gathered volume is approximately 1.57 μL using approximately 25.4 μL of AIBN compound.

  14. Fabrication of a Micro-Fluid Gathering Tool for the Gastrointestinal Juice Sampling Function of a Versatile Capsular Endoscope

    PubMed Central

    Koo, Kyo-in; Lee, Sangmin; Cho, Dong-il Dan

    2011-01-01

    This paper presents a micro-fluid gathering tool for a versatile capsular endoscope that employs a solid chemical propellant, azobisisobutyronitrile (AIBN). The proposed tool consists of a micro-heater, an AIBN matrix, a Venturi tube, a reservoir, an inlet, and an outlet. The micro-heater heats the AIBN matrix to be decomposed into by-products and nitrogen gas. This nitrogen gas generates negative pressure passing through the Venturi tube. The generated negative pressure inhales a target fluid from around the inlet into the reservoir. All the parts are designed to be embedded inside a cylindrical shape with a diameter of 17 mm and a height of 2.3 mm in order to integrate it into a versatile developmental capsular endoscope without any scaledown. Two sets of the proposed tools are fabricated and tested: one is made of polydimethylsiloxane (PDMS) and the other is made of polymethylmethacrylate (PMMA). In performance comparisons, the PDMS gathering tool can withstand a stronger pulling force, and the PMMA gathering tool requires a less negative pressure for inhaling the same target fluid. Due to the instant and full activation of the thin AIBN matrix, both types of gathering tool show analogous performance in the sample gathering evaluation. The gathered volume is approximately 1.57 μL using approximately 25.4 μL of AIBN compound. PMID:22163997

  15. LACO-Wiki: A land cover validation tool and a new, innovative teaching resource for remote sensing and the geosciences

    NASA Astrophysics Data System (ADS)

    See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Steffen, Fritz

    2016-04-01

    The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or increasingly as open source tools. However, there is little standardization for land cover validation, nor a set of open tools available for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.
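
    The accuracy-report step of such a validation workflow reduces to a confusion matrix; the sketch below (invented labels, not LACO-Wiki code) derives overall, producer's and user's accuracy from reference and map labels.

        import numpy as np

        classes = ["forest", "crop", "urban"]
        reference = ["forest", "crop", "urban", "forest", "crop", "forest"]
        mapped    = ["forest", "crop", "crop",  "forest", "urban", "forest"]

        idx = {c: i for i, c in enumerate(classes)}
        cm = np.zeros((3, 3), dtype=int)
        for r, m in zip(reference, mapped):
            cm[idx[r], idx[m]] += 1  # rows: reference, columns: map

        overall = np.trace(cm) / cm.sum()
        producers = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)  # vs reference totals
        users = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)      # vs map totals
        print(cm, overall, producers, users, sep="\n")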

  16. Next Generation Ship-Borne ASW-System: An Exemplary Exertion of Methodologies and Tools Applied According to the German Military Acquisition Guidelines

    DTIC Science & Technology

    2013-06-01

    as well as the evaluation of product parameters, operational test and functional limits. The product will be handed over to the designated ...which results in a system design that can be tested, produced, and fielded to satisfy the need. The concept development phase enables us to determine...specifications that can be tested or verified. The requirements presented earlier are the minimum necessary to allow the design process to find

  17. Fuzzy-based simulation of real color blindness.

    PubMed

    Lee, Jinmi; dos Santos, Wellington P

    2010-01-01

    About 8% of men are affected by color blindness. That population is at a disadvantage since they cannot perceive a substantial amount of visual information. This work presents two computational tools developed to assist color-blind people. The first one tests for color blindness and assesses its severity. The second tool is based on fuzzy logic and implements a proposed method to simulate real red and green color blindness in order to generate synthetic cases of color vision disturbance in statistically significant amounts. Our purpose is to develop correction tools and obtain a deeper understanding of the accessibility problems faced by people with chromatic visual impairment.

  18. Software engineering techniques and CASE tools in RD13

    NASA Astrophysics Data System (ADS)

    Buono, S.; Gaponenko, I.; Jones, R.; Khodabandeh, A.; Mapelli, L.; Mornacchi, G.; Prigent, D.; Sanchez-Corral, E.; Skiadelli, M.; Toppers, A.; Duval, P. Y.; Ferrato, D.; Le Van Suu, A.; Qian, Z.; Rondot, C.; Ambrosini, G.; Fumagalli, G.; Polesello, G.; Aguer, M.; Huet, M.

    1994-12-01

    The RD13 project was approved in April 1991 for the development of a scalable data-taking system suitable for hosting various LHC studies. One of its goals is the exploitation of software engineering techniques, in order to indicate their overall suitability for data acquisition (DAQ), software design and implementation. This paper describes how such techniques have been applied to the development of components of the RD13 DAQ used in test-beam runs at CERN. We describe our experience with the Artifex CASE tool and its associated methodology. The issues raised when code generated by a CASE tool has to be integrated into an existing environment are also discussed.

  19. Investigation of Tapered Roller Bearing Damage Detection Using Oil Debris Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Krieder, Gary; Fichter, Thomas

    2006-01-01

    A diagnostic tool was developed for detecting fatigue damage to tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high bypass advanced gas turbine aircraft engines. This diagnostic tool was developed and evaluated experimentally by collecting oil debris data from failure progression tests performed by The Timken Company in their Tapered Roller Bearing Health Monitoring Test Rig. Failure progression tests were performed under simulated engine load conditions. Tests were performed on one healthy bearing and three predamaged bearings. During each test, data from an on-line, in-line, inductance type oil debris sensor was monitored and recorded for the occurrence of debris generated during failure of the bearing. The bearing was removed periodically for inspection throughout the failure progression tests. Results indicate the accumulated oil debris mass is a good predictor of damage on tapered roller bearings. The use of a fuzzy logic model to enable an easily interpreted diagnostic metric was proposed and demonstrated.
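
    A minimal sketch of the proposed kind of fuzzy-logic metric, with invented membership breakpoints rather than the study's calibrated ones, might map accumulated debris mass to an easily interpreted health level:

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

        def bearing_health(debris_mg):
            # Overlapping fuzzy sets over accumulated debris mass (invented values).
            levels = {"healthy": tri(debris_mg, -1, 0, 40),
                      "damaged": tri(debris_mg, 20, 60, 100),
                      "failing": tri(debris_mg, 80, 150, 400)}
            return max(levels, key=levels.get)

        for mass in (10, 55, 200):
            print(mass, "mg ->", bearing_health(mass))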

  20. BETA: Behavioral testability analyzer and its application to high-level test generation and synthesis for testability. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Chung-Hsing

    1992-01-01

    In this thesis, a behavioral-level testability analysis approach is presented. This approach is based on analyzing the circuit behavioral description (similar to a C program) to estimate its testability by identifying controllable and observable circuit nodes. This information can be used by a test generator to gain better access to internal circuit nodes and to reduce its search space. The results of the testability analyzer can also be used to select test points or partial scan flip-flops in the early design phase. Based on selection criteria, a novel Synthesis for Testability approach called Test Statement Insertion (TSI) is proposed, which modifies the circuit behavioral description directly. Test Statement Insertion can also be used to modify the circuit structural description to improve its testability. As a result, the Synthesis for Testability methodology can be combined with an existing behavioral synthesis tool to produce more testable circuits.

  1. A Multi-Year Plan for Research, Development, and Prototype Testing of Standard Modular Hydropower Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Brennan T.; Welch, Tim; Witt, Adam M.

    The Multi-Year Plan for Research, Development, and Prototype Testing of Standard Modular Hydropower Technology (MYRP) presents a strategy for specifying, designing, testing, and demonstrating the efficacy of standard modular hydropower (SMH) as an environmentally compatible and cost-optimized renewable electricity generation technology. The MYRP provides the context, background, and vision for testing the SMH hypothesis: if standardization, modularity, and preservation of stream functionality become essential and fully realized features of hydropower technology, project design, and regulatory processes, they will enable previously unrealized levels of new project development with increased acceptance, reduced costs, increased predictability of outcomes, and increased value to stakeholders. To achieve success in this effort, the MYRP outlines a framework of stakeholder-validated criteria, models, design tools, testing facilities, and assessment protocols that will facilitate the development of next-generation hydropower technologies.

  2. Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony

    2009-01-01

    Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space, and the complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. The data generated is automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours are used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
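
    The n-factor combinatorial idea can be illustrated with a greedy pairwise (2-factor) generator; the parameter names are invented, and production tools use stronger covering-array algorithms.

        from itertools import combinations, product

        params = {"mass": [0.9, 1.0, 1.1],
                  "thrust": ["low", "nominal", "high"],
                  "wind": ["calm", "gusty"]}

        names = list(params)
        # Every pair of parameter values that must appear in some test vector.
        uncovered = {((n1, v1), (n2, v2))
                     for n1, n2 in combinations(names, 2)
                     for v1 in params[n1] for v2 in params[n2]}

        tests = []
        while uncovered:
            # Greedily pick the full assignment covering the most uncovered pairs.
            best = max(product(*params.values()),
                       key=lambda vals: sum(
                           ((names[i], vals[i]), (names[j], vals[j])) in uncovered
                           for i, j in combinations(range(len(names)), 2)))
            tests.append(dict(zip(names, best)))
            uncovered -= {((names[i], best[i]), (names[j], best[j]))
                          for i, j in combinations(range(len(names)), 2)}

        print(len(tests), "test vectors instead of", 3 * 3 * 2)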

  3. Simultaneous Scheduling of Jobs, AGVs and Tools Considering Tool Transfer Times in Multi Machine FMS By SOS Algorithm

    NASA Astrophysics Data System (ADS)

    Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.

    2017-08-01

    This article addresses the simultaneous scheduling of machines, AGVs and tools in a multi-machine Flexible Manufacturing System (FMS), where machines are allowed to share tools and the transfer times of jobs and tools between machines are considered, to generate optimal sequences that minimize makespan. The performance of an FMS is expected to improve through effective utilization of its resources, by proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a potent alternative for solving optimization problems such as scheduling, and it has proven itself. The proposed SOS algorithm is first tested on 22 job sets, with makespan as the objective, for scheduling of machines and tools with tool sharing but without transfer times of jobs and tools, and the results are compared with those of existing methods; SOS outperforms them. The same SOS algorithm is then applied to the simultaneous scheduling of machines, AGVs and tools with tool sharing and transfer times to determine the best sequences that minimize makespan.

  4. XS: a FASTQ read simulator.

    PubMed

    Pratas, Diogo; Pinho, Armando J; Rodrigues, João M O S

    2014-01-16

    Emerging next-generation sequencing (NGS) is bringing, besides naturally huge amounts of data, an avalanche of new specialized tools (for analysis, compression, alignment, among others) and large public and private network infrastructures. A direct need is therefore arising for specific simulation tools for testing and benchmarking, such as a flexible and portable FASTQ read simulator that requires no reference sequence yet is correctly prepared to produce approximately the same characteristics as real data. We present XS, a skilled FASTQ read simulation tool, flexible, portable (does not need a reference sequence) and tunable in terms of sequence complexity. It has several running modes, depending on the time and memory available, and is aimed at testing computing infrastructures, namely cloud computing of large-scale projects, and testing FASTQ compression algorithms. Moreover, XS offers the possibility of simulating the three main FASTQ components individually (headers, DNA sequences and quality-scores). XS provides an efficient and convenient method for fast simulation of FASTQ files, such as those from Ion Torrent (currently uncovered by other simulators), Roche-454, Illumina and ABI-SOLiD sequencing machines. This tool is publicly available at http://bioinformatics.ua.pt/software/xs/.
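
    A reference-free simulator of this kind boils down to sampling bases and quality scores; the sketch below is a simplified stand-in for XS, with an invented header layout and a crude Gaussian quality model.

        import random

        def simulate_fastq(n_reads, read_len, gc=0.5, seed=1):
            rng = random.Random(seed)
            # Base composition tunable via GC content: A, C, G, T.
            weights = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]
            for i in range(n_reads):
                seq = "".join(rng.choices("ACGT", weights=weights, k=read_len))
                # Phred+33 qualities, clamped to a plausible range.
                qual = "".join(chr(33 + min(40, max(2, round(rng.gauss(35, 4)))))
                               for _ in range(read_len))
                yield f"@xs_sim_{i}\n{seq}\n+\n{qual}"

        print("\n".join(simulate_fastq(2, 50)))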

  5. Web Tools: The Second Generation

    ERIC Educational Resources Information Center

    Pascopella, Angela

    2008-01-01

    Web 2.0 tools and technologies, or second generation tools, help districts to save time and money, and eliminate the need to transfer or move files back and forth across computers. Many Web 2.0 tools help students think critically and solve problems, which falls under the 21st-century skills. The second-generation tools are growing in popularity…

  6. Nanoparticle generation and interactions with surfaces in vacuum systems

    NASA Astrophysics Data System (ADS)

    Khopkar, Yashdeep

    Extreme ultraviolet lithography (EUVL) is the most likely candidate as the next generation technology beyond immersion lithography to be used in high volume manufacturing in the semiconductor industry. One of the most problematic areas in the development process is the fabrication of mask blanks used in EUVL. As the masks are reflective, there is a chance that any surface aberrations in the form of bumps or pits could be printed on the silicon wafers. There is a strict tolerance to the number density of such defects on the mask that can be used in the final printing process. Bumps on the surface could be formed when particles land on the mask blank surface during the deposition of multiple bi-layers of molybdenum and silicon. To identify, and possibly mitigate the source of particles during mask fabrication, SEMATECH investigated particle generation in the VEECO Nexus deposition tool. They found several sources of particles inside the tool such as valves. To quantify the particle generation from vacuum components, a test bench suitable for evaluating particle generation in the sub-100 nm particle size range was needed. The Nanoparticle test bench at SUNY Polytechnic Institute was developed as a sub-set of the overall SEMATECH suite of metrology tools used to identify and quantify sources of particles inside process tools that utilize these components in the semiconductor industry. Vacuum valves were tested using the test bench to investigate the number, size and possible sources of particles inside the valves. Ideal parameters of valve operation were also investigated using a 300-mm slit valve with the end goal of finding optimized parameters for minimum particle generation. SEMATECH also pursued the development of theoretical models of particle transport replicating the expected conditions in an ion beam deposition chamber assuming that the particles were generated. In the case of the ion beam deposition tool used in the mask blank fabrication process, the ion beam in the tool could significantly accelerate particles. Assuming that these particles are transported to various surfaces inside the deposition tool, the next challenge is to enhance the adhesion of the particles on surfaces that are located in the non-critical areas inside the tool. However, for particles in the sub-100 nm size range, suitable methods do not exist that can compare the adhesion probability of particles upon impact for a wide range of impact velocities, surfaces and particle types. Traditional methods, which rely on optical measurement of particle velocities in the micron-size regime, cannot be used for sub-100 nm particles as the particles do not scatter sufficient light for the detectors to function. All the current methods rely on electrical measurements taken from impacting particles onto a surface. However, for sub-100 nm particles, the impact velocity varies in different regions of the same impaction spot. Therefore, electrical measurements are inadequate to quantify the exact adhesion characteristics at different impact velocities to enable a comparison of multiple particle-surface systems. Therefore, we propose a new method based on the use of scanning electron microscopy (SEM) imaging to study the adhesion of particles upon impact on surfaces. The use of SEM imaging allows for single particle detection across a single impaction spot and, therefore, enables the comparison of different regions with different impact velocities in a single impaction spot. 
The proposed method will provide comprehensive correlation between the adhesion probability of sub-100 nm particles and a wide range of impact velocities and angles. The location of each particle is compared with the impact velocity predicted using computational fluid dynamics methods to generate a comprehensive adhesion map involving the impact of 70 nm particles on a polished surface across a large impact velocity range. The final adhesion probability map shows higher adhesion at oblique impact angles compared to normal incidence impacts. Theoretical studies and experiments with micron-sized particles have shown that the contact area between the particle and the surface decreases at lower incidence angles, which results in a decrease in the adhesion probability of the particle. The most likely cause of this result was the role of plastic deformation of particles and its effect on adhesion. Therefore, 70 nm sucrose particles were also impacted under similar impaction conditions to compare the role of plastic deformation on the adhesion characteristics of a particle. Sucrose particles have a modulus of elasticity approximately 10 times that of polystyrene latex (PSL) particles and were found to have almost no adhesion on the surface at the same impact velocities where the highest adhesion of PSL particles was measured. Besides the role of plastic deformation, the influence of other possible errors in this process was investigated but not found to be significant. (Abstract shortened by UMI.)

  7. Applying Ecological Site Concepts and State-and-Transition Models to a Grazed Riparian Rangeland

    USDA-ARS?s Scientific Manuscript database

    Ecological site and state-and-transition models are useful tools for generating and testing hypotheses about drivers of vegetation composition in non-equilibrium systems, and have been widely implemented on rangelands. Compared to upland areas, little attention has been given to developing ecologica...

  8. Effectiveness of an Online Simulation for Teacher Education

    ERIC Educational Resources Information Center

    Badiee, Farnaz; Kaufman, David

    2014-01-01

    This study evaluated the effectiveness of the "simSchool" (v.1) simulation as a tool for preparing student teachers for actual classroom teaching. Twenty-two student teachers used the simulation for a practice session and two test sessions; data included objective performance statistics generated by the simulation program, self-rated…

  9. Helicopter Acoustics, part 2. [conferences

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Exterior and interior helicopter noise problems are addressed from the physics and engineering as well as the human factors point of view. Noise regulation concepts, human factors and criteria, rotor noise generation and control, design, operations and testing for noise control, helicopter noise prediction, and research tools and measurements are covered.

  10. Application of in vitro biopharmaceutical methods in development of immediate release oral dosage forms intended for paediatric patients.

    PubMed

    Batchelor, Hannah K; Kendall, Richard; Desset-Brethes, Sabine; Alex, Rainer; Ernest, Terry B

    2013-11-01

    Biopharmaceutics is routinely used in the design and development of medicines to generate science-based evidence to predict in vivo performance; the application of this knowledge specifically to paediatric medicines development is yet to be explored. The aim of this review is to present the current status of available biopharmaceutical tools and tests, including solubility, permeability and dissolution, that may be appropriate for use in the development of immediate release oral paediatric medicines. The existing tools used in adults are discussed together with any limitations for their use within paediatric populations. The results of this review highlight several knowledge gaps in current methodologies in paediatric biopharmaceutics. The authors provide recommendations based on existing knowledge to adapt tests to better represent paediatric patient populations and also provide suggestions for future research that may lead to better tools to evaluate paediatric medicines. Copyright © 2013 Elsevier B.V. All rights reserved.
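
    In the dissolution area the review touches on, a standard way to compare two dissolution profiles (a general pharmacopoeial metric, not specific to this paper) is the f2 similarity factor, where f2 >= 50 conventionally indicates similar profiles:

        import math

        def f2_similarity(reference, test):
            """reference/test: percent dissolved at matched time points."""
            n = len(reference)
            msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
            return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

        ref = [20, 45, 70, 85, 92]  # invented example profiles
        tst = [18, 40, 66, 82, 90]
        print(round(f2_similarity(ref, tst), 1))  # ~72.5, i.e. similar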

  11. Sim-based detection tools to minimize motorcycle theft

    NASA Astrophysics Data System (ADS)

    Triansyah, F. A.; Mudhafar, Z.; Lestari, C.; Amilia, S.; Ruswana, N. D.; Junaeti, E.

    2018-05-01

    The growing number of motorcycles in Indonesia has spurred an increase in motorcycle theft. In addition, the number of motorcycles increases the number of traffic accidents caused by unqualified riders. The purpose of this research is to build METEOR (SIM Detector), a tool that checks the validity of the SIM (driver's license) used to operate a motorcycle and protects the motorcycle against theft. METEOR was made through assembly, coding, testing, and installation stages on the motorcycle. Based on the research conducted, the resulting METEOR device detects the SIM using an added RFID chip and can be installed on a motorcycle. Without the proper SIM, a motorcycle equipped with METEOR cannot be started. It can therefore be concluded that a motorcycle with METEOR installed gains a safety device against theft as well as a tool to test the eligibility of motorcycle riders.
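
    The core logic the abstract describes reduces to a whitelist check on the RFID tag read from the rider's licence; the sketch below is a toy illustration with invented IDs, not the METEOR firmware.

        # Registered licence tags and their validity status (invented data).
        REGISTERED_LICENCES = {"04A1B2C3": {"holder": "owner", "valid": True}}

        def ignition_allowed(tag_id):
            """Enable the ignition circuit only for a registered, valid licence."""
            licence = REGISTERED_LICENCES.get(tag_id)
            return bool(licence and licence["valid"])

        print(ignition_allowed("04A1B2C3"))  # True: engine may start
        print(ignition_allowed("DEADBEEF"))  # False: stays locked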

  12. A Next Generation Sequencing custom gene panel as first line diagnostic tool for atypical cases of syndromic obesity: Application in a case of Alström syndrome.

    PubMed

    Maltese, Paolo E; Iarossi, Giancarlo; Ziccardi, Lucia; Colombo, Leonardo; Buzzonetti, Luca; Crinò, Antonino; Tezzele, Silvia; Bertelli, Matteo

    2018-02-01

    Obesity phenotype can be manifested as an isolated trait or accompanied by multisystem disorders as part of a syndromic picture. In both situations, the same molecular pathways may be involved to different degrees. This evidence is stronger in syndromic obesity, in which the phenotypes of different syndromes may overlap. In these cases, genetic testing can unequivocally provide a final diagnosis. Here we describe a patient who met the diagnostic criteria for Alström syndrome only during adolescence. Genetic testing was requested at 25 years of age for a final confirmation of the diagnosis. The genetic diagnosis of Alström syndrome was obtained through a Next Generation Sequencing genetic test approach using a custom-designed gene panel of 47 genes associated with syndromic and non-syndromic obesity. Genetic analysis revealed a novel homozygous frameshift variant p.(Arg1550Lysfs*10) on exon 8 of the ALMS1 gene. This case shows the need for a revision of the diagnostic criteria guidelines as a consequence of the recent advent of massively parallel sequencing technology. The indications for genetic testing reported in the currently accepted diagnostic criteria for Alström syndrome were drafted when sequencing was expensive and time consuming. Nowadays, Next Generation Sequencing testing could be considered as a first-line diagnostic tool not only for Alström syndrome but, more generally, for all those atypical or not clearly distinguishable cases of syndromic obesity, thus avoiding delayed diagnosis and treatment. Early diagnosis permits better follow-up and pre-symptomatic interventions. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  13. Diagnostic test accuracy of nutritional tools used to identify undernutrition in patients with colorectal cancer: a systematic review.

    PubMed

    Håkonsen, Sasja Jul; Pedersen, Preben Ulrich; Bath-Hextall, Fiona; Kirkpatrick, Pamela

    2015-05-15

    Effective nutritional screening, nutritional care planning and nutritional support are essential in all settings, and there is no doubt that a health service seeking to increase safety and clinical effectiveness must take nutritional care seriously. Screening and early detection of malnutrition are crucial in identifying patients at nutritional risk. There is a high prevalence of malnutrition in hospitalized patients undergoing treatment for colorectal cancer. To synthesize the best available evidence regarding the diagnostic test accuracy of nutritional tools (sensitivity and specificity) used to identify malnutrition (specifically undernutrition) in patients with colorectal cancer (such as the Malnutrition Screening Tool and Nutritional Risk Index) compared to reference tests (such as the Subjective Global Assessment or Patient Generated Subjective Global Assessment). Patients with colorectal cancer requiring either (or all) surgery, chemotherapy and/or radiotherapy in secondary care. Focus of the review: The diagnostic test accuracy of validated assessment tools/instruments (such as the Malnutrition Screening Tool and Nutritional Risk Index) in the diagnosis of malnutrition (specifically under-nutrition) in patients with colorectal cancer, relative to reference tests (Subjective Global Assessment or Patient Generated Subjective Global Assessment). Types of studies: Diagnostic test accuracy studies regardless of study design. Studies published in English, German, Danish, Swedish and Norwegian were considered for inclusion in this review. Databases were searched from their inception to April 2014. Methodological quality was determined using the Quality Assessment of Diagnostic Accuracy Studies checklist. Data were collected using the data extraction form: the Standards for Reporting Studies of Diagnostic Accuracy checklist for the reporting of studies of diagnostic accuracy. The accuracy of diagnostic tests is presented in terms of sensitivity, specificity, positive and negative predictive values. In addition, the positive likelihood ratio (sensitivity/[1 - specificity]) and negative likelihood ratio ((1 - sensitivity)/specificity) were also calculated and presented in this review to provide information about the likelihood that a given test result would be expected when the target condition is present compared with the likelihood that the same result would be expected when the condition is absent. Not all trials reported true positive, true negative, false positive and false negative rates; therefore, these rates were calculated based on the data in the published papers. A two-by-two truth table was reconstructed for each study, and sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio and negative likelihood ratio were calculated for each study. A summary receiver operating characteristic (SROC) curve was constructed to determine the relationship between sensitivity and specificity, and the area under the SROC curve, which measures the usefulness of a test, was calculated. Meta-analysis was not considered appropriate; therefore, data were synthesized in a narrative summary. 1. One study evaluated the Malnutrition Screening Tool against the reference standard Patient-Generated Subjective Global Assessment. The sensitivity was 56% and the specificity 84%.
The positive likelihood ratio was 3.100, the negative likelihood ratio was 0.59, the diagnostic odds ratio (CI 95%) was 5.20 (1.09-24.90) and the Area Under the Curve (AUC) represents only poor to fair diagnostic test accuracy. A total of two studies evaluated the diagnostic accuracy of the Malnutrition Universal Screening Tool (MUST) (index test) compared to both the Subjective Global Assessment (SGA) (reference standard) and the PG-SGA (reference standard) in patients with colorectal cancer. In MUST vs SGA the sensitivity of the tool was 96%, specificity was 75%, LR+ 3.826, LR- 0.058, diagnostic OR (CI 95%) 66.00 (6.61-659.24) and AUC represented excellent diagnostic accuracy. In MUST vs PG-SGA the sensitivity of the tool was 72%, specificity 48.9%, LR+ 1.382, LR- 0.579, diagnostic OR (CI 95%) 2.39 (0.87-6.58) and AUC indicated that the tool failed as a diagnostic test to identify patients with colorectal cancer at nutritional risk. The Nutrition Risk Index (NRI) was compared to SGA, showing a sensitivity of 95.2%, specificity of 62.5%, LR+ 2.521, LR- 0.087, diagnostic OR (CI 95%) 28.89 (6.93-120.40), and AUC represented good diagnostic accuracy. In regard to NRI vs PG-SGA the sensitivity of the tool was 68%, specificity 64%, LR+ 1.947, LR- 0.487, diagnostic OR (CI 95%) 4.00 (1.23-13.01) and AUC indicated poor diagnostic test accuracy. There are no single, specific tools used to screen or assess the nutritional status of colorectal cancer patients. All tools showed varied diagnostic accuracies when compared to the reference standards SGA and PG-SGA. Hence, clinical judgment combined with perhaps the SGA or PG-SGA should play a major role. The PG-SGA offers several advantages over the SGA tool: 1) the patient completes the medical history component, thereby decreasing the amount of time involved; 2) it contains more nutrition impact symptoms, which are important to the patient with cancer; and 3) it has a scoring system that allows patients to be triaged for nutritional intervention. Therefore, the PG-SGA could be used as a nutrition assessment tool as it allows quick identification and prioritization of colorectal cancer patients with malnutrition in combination with other parameters. This systematic review highlights the need for the following: Further studies need to investigate the diagnostic accuracy of already existing nutritional screening tools in the context of colorectal cancer patients. If new screening tools are developed, they should be developed and validated in the specific clinical context within the same patient population (colorectal cancer patients). The Joanna Briggs Institute.
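
    All of the reported metrics derive from the two-by-two truth table; the sketch below reproduces the calculations from invented counts.

        def diagnostic_metrics(tp, fp, fn, tn):
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            return {
                "sensitivity": sens,
                "specificity": spec,
                "ppv": tp / (tp + fp),                 # positive predictive value
                "npv": tn / (tn + fn),                 # negative predictive value
                "lr_plus": sens / (1 - spec),          # positive likelihood ratio
                "lr_minus": (1 - sens) / spec,         # negative likelihood ratio
                "diagnostic_or": (tp * tn) / (fp * fn),
            }

        for name, value in diagnostic_metrics(tp=24, fp=8, fn=4, tn=42).items():
            print(f"{name}: {value:.2f}")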

  14. Critical thinking skills in midwifery practice: Development of a self-assessment tool for students.

    PubMed

    Carter, Amanda G; Creedy, Debra K; Sidebotham, Mary

    2017-07-01

    Develop and test a tool designed for use by pre-registration midwifery students to self-appraise their critical thinking in practice. A descriptive cohort design was used. All students (n=164) enrolled in a three-year Bachelor of Midwifery program in Queensland, Australia were eligible. The staged model for tool development involved item generation, mapping of draft items to critical thinking concepts, expert review to test content validity, pilot testing of the tool with a convenience sample of students, and psychometric testing. Students (n=126, 76.8% response rate) provided demographic details and completed the new tool and five questions from the Motivated Strategies for Learning Questionnaire (MSLQ) via an online platform or paper version. A high content validity index score of 0.97 was achieved through expert review. Construct validity via factor analysis revealed four factors: seeks information, reflects on practice, facilitates shared decision making, and evaluates practice. The mean total score for the tool was 124.98 (SD=12.58). Total and subscale scores correlated significantly. The scale achieved good internal reliability, with a Cronbach's alpha coefficient of 0.92. Concurrent validity with the MSLQ subscale was 0.35 (p<0.001). This study established the reliability and validity of the CACTiM (student version) for use by pre-registration midwifery students to self-assess critical thinking in practice. Critical thinking skills are vital for safe and effective midwifery practice. Students' assessment of their critical thinking development throughout their pre-registration programme makes these skills explicit, and could guide teaching innovation to address identified deficits. The availability of a reliable and valid tool assists research into the development of critical thinking in education and practice. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
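
    The internal-reliability statistic reported above (Cronbach's alpha) can be computed directly from an item-score matrix. The sketch below uses random placeholder responses, not the CACTiM data.

    ```python
    # Minimal sketch of Cronbach's alpha for a respondents-by-items score matrix.
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_items) array of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    fake_scores = rng.integers(1, 6, size=(126, 20))  # 126 students, 20 items (placeholder)
    print(round(cronbach_alpha(fake_scores), 2))
    ```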

  15. Assessment of dysglycemia risk in the Kitikmeot region of Nunavut: using the CANRISK tool

    PubMed Central

    Ying, Jiang; Susan, Rogers Van Katwyk; Yang, Mao; Heather, Orpana; Gina, Agarwal; Margaret, de Groh; Monique, Skinner; Robyn, Clarke

    2017-01-01

    Introduction: The Public Health Agency of Canada adapted a Finnish diabetes screening tool (FINDRISC) to create a tool (CANRISK) tailored to Canada’s multi-ethnic population. CANRISK was developed using data collected in seven Canadian provinces. In an effort to extend the applicability of CANRISK to northern territorial populations, we completed a study with the mainly Inuit population in the Kitikmeot region of Nunavut. Methods: We obtained CANRISK questionnaires, physical measures and blood samples from participants in five Nunavut communities in Kitikmeot. We used logistic regression to test model fit using the original CANRISK risk factors for dysglycemia (prediabetes and diabetes). Dysglycemia was assessed using fasting plasma glucose (FPG) alone and/or an oral glucose tolerance test. We generated participants’ CANRISK scores to test the functioning of this tool in the Inuit population. Results: A total of 303 individuals participated in the study. Half were aged less than 45 years, two-thirds were female and 84% were Inuit. A total of 18% had prediabetes, and an additional 4% had undiagnosed diabetes. The odds of having dysglycemia rose exponentially with age, while the relationship with BMI was U-shaped. Compared with lab test results, and using a cut-off point of 32, the CANRISK tool achieved a sensitivity of 61%, a specificity of 66%, a positive predictive value of 34% and an accuracy rate of 65%. Conclusion: The CANRISK tool achieved a similar accuracy in detecting dysglycemia in this mainly Inuit population as it did in a multi-ethnic sample of Canadians. We found the CANRISK tool to be adaptable to the Kitikmeot region, and more generally to Nunavut. PMID:28402800
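
    A hedged sketch of the kind of model-fit check described in the Methods: logistic regression of dysglycemia status on age and BMI, with a squared BMI term to accommodate the U-shaped relationship the authors report. All values below are simulated placeholders, not the Nunavut study data.

    ```python
    # Simulated model-fit check: logistic regression with a quadratic BMI term.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 303
    age = rng.uniform(18, 80, n)
    bmi = rng.uniform(18, 45, n)

    # Simulate an age effect plus a U-shaped BMI effect centered near BMI = 27.
    logit = -4 + 0.05 * (age - 45) + 0.02 * (bmi - 27.0) ** 2
    p = 1 / (1 + np.exp(-logit))
    y = rng.random(n) < p                      # simulated dysglycemia labels

    X = np.column_stack([age, (bmi - 27.0) ** 2])
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(model.intercept_, model.coef_)       # recovered coefficients
    ```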

  16. SU-E-T-398: Feasibility of Automated Tools for Robustness Evaluation of Advanced Photon and Proton Techniques in Oropharyngeal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Liang, X; Kalbasi, A

    2014-06-01

    Purpose: Advanced radiotherapy (RT) techniques such as proton pencil beam scanning (PBS) and photon-based volumetric modulated arc therapy (VMAT) have dosimetric advantages in the treatment of head and neck malignancies. However, anatomic or alignment changes during treatment may limit robustness of PBS and VMAT plans. We assess the feasibility of automated deformable registration tools for robustness evaluation in adaptive PBS and VMAT RT of oropharyngeal cancer (OPC). Methods: We treated 10 patients with bilateral OPC with advanced RT techniques and obtained verification CT scans with physician-reviewed target and OAR contours. We generated 3 advanced RT plans for each patient: a proton PBS plan using 2 posterior oblique fields (2F), a proton PBS plan using an additional third low-anterior field (3F), and a photon VMAT plan using 2 arcs (Arc). For each of the planning techniques, we forward calculated initial (Ini) plans on the verification scans to create verification (V) plans. We extracted DVH indicators based on physician-generated contours for 2 target and 14 OAR structures to investigate the feasibility of two automated tools (contour propagation (CP) and dose deformation (DD)) as surrogates for routine clinical plan robustness evaluation. For each verification scan, we compared DVH indicators of V, CP and DD plans in a head-to-head fashion using Student's t-test. Results: We performed 39 verification scans; each patient underwent 3 to 6 verification scans. We found no differences in doses to target or OAR structures between V and CP, V and DD, and CP and DD plans across all patients (p > 0.05). Conclusions: Automated robustness evaluation tools, CP and DD, accurately predicted dose distributions of verification (V) plans using physician-generated contours. These tools may be further developed as a potential robustness screening tool in the workflow for adaptive treatment of OPC using advanced RT techniques, reducing the need for physician-generated contours.

  17. NREL's Water Power Software Makes a Splash; NREL Highlights, Research & Development, NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-06-01

    WEC-Sim is a DOE-funded software tool being jointly developed by NREL and SNL. WEC-Sim computationally models wave energy converters (WECs), devices that generate electricity using the movement of water systems such as oceans and rivers. There is great potential for WECs to generate electricity, but the industry has yet to establish a commercially viable concept. Modeling, design, and simulation tools are essential to the successful development of WECs. Commercial WEC modeling software tools can't be modified by the user. In contrast, WEC-Sim is free, open source, and flexible enough to be modified to meet the rapidly evolving needs of the WEC industry. By modeling the power generation performance and dynamic loads of WEC designs, WEC-Sim can help support the development of new WEC devices by optimizing designs for cost of energy and competitiveness. By being easily accessible, WEC-Sim promises to help level the playing field in the WEC industry. Importantly, WEC-Sim is also excellent at its job! In 2014, WEC-Sim was used in conjunction with NREL’s FAST modeling software to win a hydrodynamic modeling competition. WEC-Sim and FAST performed very well at predicting the motion of a test device in comparison to other modeling tools. The most recent version of WEC-Sim (v1.1) was released in April 2015.

  18. Design and implementation in VHDL code of the two-dimensional fast Fourier transform for frequency filtering, convolution and correlation operations

    NASA Astrophysics Data System (ADS)

    Vilardy, Juan M.; Giacometto, F.; Torres, C. O.; Mattos, L.

    2011-01-01

    The two-dimensional Fast Fourier Transform (FFT 2D) is an essential tool in two-dimensional discrete signal analysis and processing, enabling a large number of applications. This article presents the description and synthesis in VHDL code of the FFT 2D with fixed-point binary representation using Matlab's Simulink HDL Coder programming tool, showing a quick and easy way to handle overflow and underflow, to create registers, adders and multipliers for complex data in VHDL, and to generate test benches for verification of the generated code in the ModelSim tool. The main objective of developing the hardware architecture of the FFT 2D is the subsequent implementation of the following operations applied to images: frequency filtering, convolution and correlation. The description and synthesis of the hardware architecture use the XC3S1200E Spartan-3E family FPGA from Xilinx.
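
    As a software reference point for the three target operations (frequency filtering, convolution and correlation via the 2D FFT), the floating-point NumPy sketch below mirrors what the fixed-point VHDL architecture computes; the array sizes, kernel and low-pass cutoff are arbitrary choices for illustration.

    ```python
    # 2D-FFT-based filtering, convolution and correlation (software reference).
    import numpy as np

    image = np.random.rand(64, 64)
    kernel = np.zeros((64, 64))
    kernel[:3, :3] = 1.0 / 9.0        # 3x3 averaging kernel, zero-padded to 64x64

    F_img = np.fft.fft2(image)
    F_ker = np.fft.fft2(kernel)

    # Circular convolution and correlation via pointwise products in frequency space.
    convolution = np.real(np.fft.ifft2(F_img * F_ker))
    correlation = np.real(np.fft.ifft2(F_img * np.conj(F_ker)))

    # Frequency filtering: keep only the lowest frequencies (ideal low-pass mask).
    mask = np.zeros_like(F_img)
    mask[:8, :8] = 1
    mask[:8, -8:] = 1
    mask[-8:, :8] = 1
    mask[-8:, -8:] = 1
    filtered = np.real(np.fft.ifft2(F_img * mask))
    ```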

  19. Novel inter and intra prediction tools under consideration for the emerging AV1 video codec

    NASA Astrophysics Data System (ADS)

    Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil

    2017-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, AV1, within a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.

  20. Estimating the Impacts of Direct Load Control Programs Using GridPIQ, a Web-Based Screening Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, Seemita; Thayer, Brandon L.; Barrett, Emily L.

    In direct load control (DLC) programs, utilities can curtail the demand of participating loads to contractually agreed-upon levels during periods of critical peak load, thereby reducing stress on the system, generation cost, and required transmission and generation capacity. Participating customers receive financial incentives. The impacts of implementing DLC programs extend well beyond peak shaving. There may be a shift of load proportional to the interrupted load to the times before or after a DLC event, and different load shifts have different consequences. Tools that can quantify the impacts of such programs on load curves, peak demand, emissions, and fossil fuel costs are currently lacking. The Grid Project Impact Quantification (GridPIQ) screening tool includes a Direct Load Control module, which takes into account project-specific inputs as well as the larger system context in order to quantify the impacts of a given DLC program. This allows users (utilities, researchers, etc.) to test and compare different program specifications and their impacts.

  1. Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM

    NASA Technical Reports Server (NTRS)

    Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip

    2017-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.

  2. NullSeq: A Tool for Generating Random Coding Sequences with Desired Amino Acid and GC Contents.

    PubMed

    Liu, Sophia S; Hockenberry, Adam J; Lancichinetti, Andrea; Jewett, Michael C; Amaral, Luís A N

    2016-11-01

    The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. In order to accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. While many tools have been developed to create random nucleotide sequences, protein coding sequences are subject to a unique set of constraints that complicates the process of generating appropriate null models. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content for the purpose of hypothesis testing. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content, which we have developed into a Python package. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. Furthermore, this approach can easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs, which will lead to a better understanding of biological processes as well as more effective engineering of biological systems.
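
    A rough sketch of the underlying idea (not the NullSeq package's actual API): draw synonymous codons for a fixed amino acid sequence, weighting each codon choice toward a target GC content with an exponential, maximum-entropy-style bias. The codon table below is truncated for brevity.

    ```python
    # Biased synonymous-codon sampling toward a GC target (illustrative only).
    import math
    import random

    CODONS = {  # truncated illustrative subset of the standard genetic code
        'M': ['ATG'],
        'K': ['AAA', 'AAG'],
        'G': ['GGA', 'GGC', 'GGG', 'GGT'],
        'L': ['TTA', 'TTG', 'CTA', 'CTC', 'CTG', 'CTT'],
    }

    def random_cds(protein, beta=0.0):
        """beta > 0 favors GC-rich codons; beta < 0 favors AT-rich ones."""
        out = []
        for aa in protein:
            codons = CODONS[aa]
            weights = [math.exp(beta * (c.count('G') + c.count('C'))) for c in codons]
            out.append(random.choices(codons, weights=weights)[0])
        return ''.join(out)

    print(random_cds('MKGLL', beta=0.5))   # one biased random coding sequence
    ```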

  3. Plasma diagnostic development and UHV testing for the ALPHA collaboration at Marquette University

    NASA Astrophysics Data System (ADS)

    Tharp, T. D.; Alpha Collaboration

    2017-10-01

    At Marquette, we are developing the next generation of nonneutral plasma diagnostics for the ALPHA experiment at CERN. ALPHA is building a new vertical experiment to test the gravitational interaction of antihydrogen with Earth. This expansion requires significant changes to the design of our plasma diagnostic suites: the next generation of tools must be able to measure plasmas from two directions, and must be capable of operating in a horizontal position. The diagnostic suite includes measurements of plasma density, shape, and temperature. The hardware used includes a MicroChannel Plate (MCP), a Faraday Cup, and an electron gun. In addition, we are building a vacuum chamber to test the viability of 3D-printed components for UHV compatibility, with target pressures of 10^-10 mbar.

  4. From data mining rules to medical logical modules and medical advices.

    PubMed

    Gomoi, Valentin; Vida, Mihaela; Robu, Raul; Stoicu-Tivadar, Vasile; Bernad, Elena; Lupşe, Oana

    2013-01-01

    Using data mining in collaboration with Clinical Decision Support Systems adds new knowledge as support for medical diagnosis. The current work presents a tool which translates data mining rules supporting the generation of medical advice into the Arden Syntax formalism. The developed system was tested with data related to 2326 births that took place in 2010 at the Bega Obstetrics - Gynaecology Hospital, Timişoara. Based on processing these data, 14 medical rules regarding the Apgar score were generated and then translated into the Arden Syntax language.

  5. Identification of quasi-steady compressor characteristics from transient data

    NASA Technical Reports Server (NTRS)

    Nunes, K. B.; Rock, S. M.

    1984-01-01

    The principal goal was to demonstrate that nonlinear compressor map parameters, which govern an in-stall response, can be identified from test data using parameter identification techniques. The tasks included developing and then applying an identification procedure to data generated by NASA LeRC on a hybrid computer. Two levels of model detail were employed. First was a lumped compressor rig model; second was a simplified turbofan model. The main outputs are the tools and procedures generated to accomplish the identification.

  6. CrossTalk. The Journal of Defense Software Engineering. Volume 13, Number 6, June 2000

    DTIC Science & Technology

    2000-06-01

    Techniques for Efficiently Generating and Testing Software This paper presents a proven process that uses advanced tools to design, develop and test... optimal software. by Keith R. Wegner Large Software Systems—Back to Basics Development methods that work on small problems seem to not scale well to...Ability Requirements for Teamwork: Implications for Human Resource Management, Journal of Management, Vol. 20, No. 2, 1994. 11. Ferguson, Pat, Watts S

  7. SU-D-BRD-01: An Automated Physics Weekly Chart Checking System Supporting ARIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, X; Yang, D

    Purpose: A software tool was developed in this study to perform automatic weekly physics chart checks on the patient data in ARIA. The tool accesses the electronic patient data directly from the ARIA server, checks the accuracy of treatment deliveries, and generates reports which summarize the delivery history and highlight the errors. Methods: The tool has four modules. 1) The database interface is designed to directly access treatment delivery data from the ARIA database before reorganizing the data into the patient chart tree (PCT). 2) The PCT is a core data structure designed to store and organize the data in logical hierarchies, and to be passed among functions. 3) The treatment data check module analyzes the organized data in the PCT and stores the checking results into the PCT. 4) The report generation module generates reports containing the treatment delivery summary, chart checking results and plots of daily treatment setup parameters (couch table positions, shifts of image guidance). The errors that are found by the tool are highlighted with colors. Results: The weekly check tool has been implemented in MATLAB and clinically tested at two major cancer centers. Javascript, cascading style sheets (CSS) and dynamic HTML were employed to create the user-interactive reports. It takes 0.06 seconds to search the delivery records of one beam with the PCT and compare them with the beam plan. The reports, saved as HTML files on a shared network folder, can be accessed by web browser on computers and mobile devices. Conclusion: The presented weekly check tool is useful to check the electronic patient treatment data in the Varian ARIA system. It could be more efficient and reliable than manual checks by physicists. The work was partially supported by a research grant from Varian Medical System.

  8. Cyberwar XXI: quantifying the unquantifiable: adaptive AI for next-generation conflict simulations

    NASA Astrophysics Data System (ADS)

    Miranda, Joseph; von Kleinsmid, Peter; Zalewski, Tony

    2004-08-01

    The era of the "Revolution in Military Affairs," "4th Generation Warfare" and "Asymmetric War" requires novel approaches to modeling warfare at the operational and strategic level of modern conflict. For example, "What if, in response to our planned actions, the adversary reacts in such-and-such a manner? What will our response be? What are the possible unintended consequences?" Next generation conflict simulation tools are required to help create and test novel courses of action (COAs) in support of real-world operations. Conflict simulations allow non-lethal and cost-effective exploration of the "what-if" of COA development. The challenge has been to develop an automated decision-support software tool which allows competing COAs to be compared in simulated dynamic environments. Principal Investigator Joseph Miranda's research is based on modeling an integrated political, military, economic, social, infrastructure and information (PMESII) environment. The main effort was to develop an adaptive AI engine which models agents operating within an operational-strategic conflict environment. This was implemented in Cyberwar XXI - a simulation which models COA selection in a PMESII environment. Within this framework, agents simulate decision-making processes and provide predictive capability of the potential behavior of Command Entities. The 2003 Iraq campaign is the first scenario ready for V&V testing.

  9. Analyses of the Integration of Carbon Dioxide Removal Assembly, Compressor, Accumulator and Sabatier Carbon Dioxide Reduction Assembly

    NASA Technical Reports Server (NTRS)

    Jeng, Frank F.; Lafuse, Sharon; Smith, Frederick D.; Lu, Sao-Dung; Knox, James C.; Campbell, Melissa L.; Scull, Timothy D.; Green, Steve

    2010-01-01

    A tool has been developed by the Sabatier Team for analyzing/optimizing the CO2 removal assembly, CO2 compressor size, its operation logic, water generation from the Sabatier reaction, utilization of CO2 from crew metabolic output, and H2 from the oxygen generation assembly. Tests were conducted using the CDRA/simulation compressor set-up at MSFC in 2003. Analysis of test data has validated the CO2 desorption rate profile, CO2 compressor performance, CO2 recovery and CO2 vacuum vent in CDRA desorption. Optimizing the compressor size and compressor operation logic for an integrated closed air revitalization system is being conducted by the Sabatier Team.

  10. Automated generation of individually customized visualizations of diagnosis-specific medical information using novel techniques of information extraction

    NASA Astrophysics Data System (ADS)

    Chen, Andrew A.; Meng, Frank; Morioka, Craig A.; Churchill, Bernard M.; Kangarloo, Hooshang

    2005-04-01

    Managing pediatric patients with neurogenic bladder (NGB) involves regular laboratory, imaging, and physiologic testing. Using input from domain experts and current literature, we identified specific data points from these tests to develop the concept of an electronic disease vector for NGB. An information extraction engine was used to extract the desired data elements from free-text and semi-structured documents retrieved from the patient's medical record. Finally, a Java-based presentation engine created graphical visualizations of the extracted data. After precision, recall, and timing evaluation, we conclude that these tools may enable clinically useful, automatically generated, and diagnosis-specific visualizations of patient data, potentially improving compliance and ultimately, outcomes.

  11. Development of a high-temperature diagnostics-while-drilling tool.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavira, David J.; Huey, David; Hetmaniak, Chris

    2009-01-01

    The envisioned benefits of Diagnostics-While-Drilling (DWD) are based on the principle that high-speed, real-time information from the downhole environment will promote better control of the drilling process. Although in practice a DWD system could provide information related to any aspect of exploration and production of subsurface resources, the current DWD system provides data on drilling dynamics. This particular set of new tools provided by DWD will allow quicker detection of problems, reduce drilling flat-time and facilitate more efficient drilling (drilling optimization) with the overarching result of decreased drilling costs. In addition to providing the driller with an improved, real-time picture of the drilling conditions downhole, data generated from DWD systems provide researchers with valuable, high fidelity data sets necessary for developing and validating enhanced understanding of the drilling process. Toward this end, the availability of DWD creates a synergy with other Sandia Geothermal programs, such as the hard-rock bit program, where the introduction of alternative rock-reduction technologies is contingent on the reduction or elimination of damaging dynamic effects. The rationale for the program and early development efforts are described in more detail by others [SAND2003-2069 and SAND2000-0239]. A first-generation low-temperature (LT) DWD system was fielded in a series of proof-of-concept (POC) tests to validate functionality. Using the LT system, DWD was subsequently used to support a single-laboratory/multiple-partner CRADA (Cooperative Research and Development Agreement) entitled Advanced Drag Bits for Hard-Rock Drilling. The drag-bit CRADA was established between Sandia and four bit companies, and involved testing of a PDC bit from each company [Wise, et al., 2003, 2004] in the same lithologic interval at the Gas Technology Institute (GTI) test facility near Catoosa, OK. In addition, the LT DWD system has been fielded in cost-sharing efforts with an industrial partner to support the development of new generation hard-rock drag bits. Following the demonstrated success of the POC DWD system, efforts were initiated in FY05 to design, fabricate and test a high-temperature (HT) capable version of the DWD system. The design temperature for the HT DWD system was 225 C. Programmatic requirements dictated that an HT DWD tool be developed during FY05 and that a working system be demonstrated before the end of FY05. During initial design discussions regarding a high-temperature system it was decided that, to the extent possible, the HT DWD system would maintain functionality similar to the low temperature system, that is, the HT DWD system would also be designed to provide the driller with real-time information on bit and bottom-hole-assembly (BHA) dynamics while drilling. Additionally, because of time and fiscal constraints associated with the HT system development, the design of the HT DWD tool would follow that of the LT tool. The downhole electronics package would be contained in a concentrically located pressure barrel, and externally applied strain gages with thru-tool connectors would also be used in the new design. Also, in order to maximize the potential wells available for the HT DWD system and to allow better comparison with the low-temperature design, the diameter of the tool was maintained at 7 inches. 
This report discusses the efforts associated with the development of a DWD system capable of sustained operation at 225 C. This report documents work performed in the second phase of the Diagnostics-While-Drilling (DWD) project in which a high-temperature (HT) version of the phase 1 low-temperature (LT) proof-of-concept (POC) DWD tool was built and tested. Descriptions of the design, fabrication and field testing of the HT tool are provided. Background on prior phases of the project can be found in SAND2003-2069 and SAND2000-0239.

  12. State-of-the-Art Fusion-Finder Algorithms Sensitivity and Specificity

    PubMed Central

    Carrara, Matteo; Beccuti, Marco; Lazzarato, Fulvio; Cavallo, Federica; Cordero, Francesca; Donatelli, Susanna; Calogero, Raffaele A.

    2013-01-01

    Background. Gene fusions arising from chromosomal translocations have been implicated in cancer. RNA-seq has the potential to discover such rearrangements generating functional proteins (chimera/fusion). Recently, many methods for chimera detection have been published. However, the specificity and sensitivity of those tools were not extensively investigated in a comparative way. Results. We tested eight fusion-detection tools (FusionHunter, FusionMap, FusionFinder, MapSplice, deFuse, Bellerophontes, ChimeraScan, and TopHat-fusion) to detect fusion events using synthetic and real datasets encompassing chimeras. A comparison analysis run only on synthetic data could generate misleading results, since we found no counterpart in the real dataset. Furthermore, most tools report a very high number of false positive chimeras. In particular, the most sensitive tool, ChimeraScan, reports a large number of false positives that we were able to significantly reduce by devising and applying two filters to remove fusions not supported by fusion junction-spanning reads or encompassing large intronic regions. Conclusions. The discordant results obtained using synthetic and real datasets suggest that synthetic datasets encompassing fusion events may not fully capture the complexity of an RNA-seq experiment. Moreover, fusion detection tools are still limited in sensitivity or specificity; thus, there is space for further improvement in the fusion-finder algorithms. PMID:23555082
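
    The two post-hoc filters the authors describe can be sketched as follows; the record field names and thresholds are hypothetical, chosen only to illustrate the filtering logic (discard calls with no junction-spanning reads, and same-chromosome calls spanning a large intronic gap), not ChimeraScan's actual output format.

    ```python
    # Illustrative post-filtering of fusion calls (hypothetical field names).
    def filter_chimeras(calls, min_spanning_reads=1, max_intron_gap=100_000):
        kept = []
        for c in calls:
            if c['junction_spanning_reads'] < min_spanning_reads:
                continue   # not supported by reads crossing the fusion junction
            if c['same_chromosome'] and c['genomic_gap'] > max_intron_gap:
                continue   # likely artifact spanning a large intronic region
            kept.append(c)
        return kept

    calls = [
        {'junction_spanning_reads': 4, 'same_chromosome': False, 'genomic_gap': 0},
        {'junction_spanning_reads': 0, 'same_chromosome': True,  'genomic_gap': 5_000},
        {'junction_spanning_reads': 7, 'same_chromosome': True,  'genomic_gap': 450_000},
    ]
    print(len(filter_chimeras(calls)))   # -> 1
    ```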

  13. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road.

    PubMed

    Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka

    2015-01-01

    Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible.

  14. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road

    PubMed Central

    Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka

    2015-01-01

    Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on “on-demand payment” for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to readdress this mismatch, insofar as possible. PMID:26230400

  15. SmaggIce 2D Version 1.8: Software Toolkit Developed for Aerodynamic Simulation Over Iced Airfoils

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Vickerman, Mary B.

    2005-01-01

    SmaggIce 2D version 1.8 is a software toolkit developed at the NASA Glenn Research Center that consists of tools for modeling the geometry of and generating the grids for clean and iced airfoils. Plans call for the completed SmaggIce 2D version 2.0 to streamline the entire aerodynamic simulation process--the characterization and modeling of ice shapes, grid generation, and flow simulation--and to be closely coupled with the public-domain application flow solver, WIND. Grids generated using version 1.8, however, can be used by other flow solvers. SmaggIce 2D will help researchers and engineers study the effects of ice accretion on airfoil performance, which is difficult to do with existing software tools because of complex ice shapes. Using SmaggIce 2D, when fully developed, to simulate flow over an iced airfoil will help to reduce the cost of performing flight and wind-tunnel tests for certifying aircraft in natural and simulated icing conditions.

  16. Generation of Functional Thyroid Tissue Using 3D-Based Culture of Embryonic Stem Cells.

    PubMed

    Antonica, Francesco; Kasprzyk, Dominika Figini; Schiavo, Andrea Alex; Romitti, Mírian; Costagliola, Sabine

    2017-01-01

    During the last decade three-dimensional (3D) cultures of pluripotent stem cells have been intensively used to understand morphogenesis and molecular signaling important for the embryonic development of many tissues. In addition, pluripotent stem cells have been shown to be a valid tool for the in vitro modeling of several congenital or chronic human diseases, opening new possibilities to study their physiopathology without using animal models. Even more interestingly, 3D culture has proved to be a powerful and versatile tool to successfully generate functional tissues ex vivo. Using similar approaches, we here describe a protocol for the generation of functional thyroid tissue using mouse embryonic stem cells and give all the details and references for its characterization and analysis both in vitro and in vivo. This model is a valid approach to study the expression and the function of genes involved in the correct morphogenesis of thyroid gland, to elucidate the mechanisms of production and secretion of thyroid hormones and to test anti-thyroid drugs.

  17. Computational simulations of supersonic magnetohydrodynamic flow control, power and propulsion systems

    NASA Astrophysics Data System (ADS)

    Wan, Tian

    This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent, chemically reacting Navier-Stokes equations, the electron energy conservation equation and the electric current Poisson equation. In the present work, the abovementioned equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results were presented. This new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Then, four problems are selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of onboard reentry shock control by a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at the NASA Ames Research Center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first. The generator section of the HVEPS test facility is then computed. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and its accuracy and efficiency are necessary when the flow complexity increases.
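
    At small scale, the solver strategy described above (GMRES with an ILU(0)-type preconditioner) can be reproduced with SciPy. This is a toy serial stand-in for the parallel implicit solver used in the work, with an arbitrary tridiagonal test matrix rather than a flow Jacobian.

    ```python
    # Toy GMRES solve with an incomplete-LU preconditioner (SciPy stand-in).
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 200
    A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
    b = np.ones(n)

    # fill_factor=1 keeps the factorization sparse, roughly in the spirit of ILU(0).
    ilu = spla.spilu(A, fill_factor=1)
    M = spla.LinearOperator((n, n), matvec=ilu.solve)   # preconditioner M ~ A^-1

    x, info = spla.gmres(A, b, M=M)
    print('converged' if info == 0 else f'info={info}',
          'residual =', np.linalg.norm(A @ x - b))
    ```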

  18. An Embedded Rule-Based Diagnostic Expert System in Ada

    NASA Technical Reports Server (NTRS)

    Jones, Robert E.; Liberman, Eugene M.

    1992-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language is discussed, especially a rule-based expert system using the ART-Ada (Automated Reasoning Tool for Ada) system shell. NASA Lewis was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components (the rule-based expert system, a graphics user interface, and communications software) make up SMART-Ada (Systems fault Management with ART-Ada). The rules were written in the ART-Ada development environment and converted to Ada source code. The graphics interface was developed with the Transportable Application Environment (TAE) Plus, which generates Ada source code to control graphics images. SMART-Ada communicates with a remote host to obtain either simulated or real data. The Ada source code generated with ART-Ada, TAE Plus, and communications code was incorporated into an Ada expert system that reads the data from a power distribution test bed, applies the rules to determine whether a fault exists, and graphically displays it on the screen. The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  19. MO-F-16A-01: Implementation of MPPG TPS Verification Tests On Various Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smilowitz, J; Bredfeldt, J; Geurts, M

    2014-06-15

    Purpose: To demonstrate the implementation of the Medical Physics Practice Guideline (MPPG) for dose calculation and beam parameter verification of treatment planning systems (TPS). Methods: We implemented the draft TPS MPPG for three linacs: Varian Trilogy, TomoHDA and Elekta Infinity. Static and modulated test plans were created. The static fields differ from those used in commissioning. Data were collected using ion chambers and diodes in a scanning water tank, a Delta4 phantom and a custom phantom. MATLAB and Microsoft Excel were used to create analysis tools to compare reference DICOM dose with scan data. This custom code allowed for the interpolation, registration and gamma analysis of arbitrary dose profiles. It will be provided as open source code. IMRT fields were validated with Delta4 registration and comparison tools. The time for each task was recorded. Results: The tests confirmed the strengths, and revealed some limitations, of our TPS. The agreement between calculated and measured dose was reported for all beams. For static fields, percent depth dose and profiles were analyzed with criteria in the draft MPPG. The results reveal areas of slight mismatch with the model (MLC leaf penumbra, buildup region). For TomoTherapy, the IMRT plan 2%/2 mm gamma analysis revealed poorest agreement in the low dose regions. For one static test plan for all 10 MV Trilogy photon beams, the plan generation, scan queue creation, data collection, data analysis and report took 2 hours, excluding tank setup. Conclusions: We have demonstrated the implementation feasibility of the TPS MPPG. This exercise generated an open source tool for dose comparisons between scan data and DICOM dose data. An easily reproducible and efficient infrastructure with streamlined data collection was created for repeatable robust testing of the TPS. The tests revealed minor discrepancies in our models and areas for improvement that are being investigated.

  20. Ada Dual-Use Summary: Ada Dual-Use Workshop Held in Vienna, Virginia on October 19-20, 1993. Ada Dual-Use Committee Briefing, November 8, 1993

    DTIC Science & Technology

    1993-11-29

    Certification: Initial Continuing Fund Experimental Research: Same Design, Implement In Ada, C, C++ Same Problem, Develop With Multiple Methodologies ...allowing analysts (non-programmers) to 'paint' specifications for screens, reports, databases, etc. 2) generating from design specifications 75% of...before the non-defense sector did and designed a tool to tackle the problem. DOD tested the tool and it worked. But DOD hasn't put Ada to work in a

  1. Testing simple deceptive honeypot tools

    NASA Astrophysics Data System (ADS)

    Yahyaoui, Aymen; Rowe, Neil C.

    2015-05-01

    Deception can be a useful defensive technique against cyber-attacks; it has the advantage of unexpectedness to attackers and offers a variety of tactics. Honeypots are a good tool for deception. They act as decoy computers to confuse attackers and exhaust their time and resources. This work tested the effectiveness of two free honeypot tools in real networks by varying their location and virtualization, and the effects of adding more deception to them. We tested a Web honeypot tool, Glastopf, and an SSH honeypot tool, Kippo. We deployed the Web honeypot in both a residential network and our organization's network and as both real and virtual machines; the organization honeypot attracted more attackers starting in the third week. Results also showed that the virtual honeypots received attacks from more unique IP addresses. They also showed that adding deception to the Web honeypot, in the form of additional linked Web pages and interactive features, generated more interest by attackers. For the purpose of comparison, we examined log files of a legitimate Web site, www.cmand.org. The traffic distributions for the Web honeypot and the legitimate Web site showed similarities (with much malicious traffic from Brazil), but the SSH honeypot was different (with much malicious traffic from China). Contrary to previous experiments where traffic to static honeypots decreased quickly, our honeypots received increasing traffic over a period of three months. It appears that both honeypot tools are useful for providing intelligence about cyber-attack methods, and that additional deception is helpful.

  2. Translating expert system rules into Ada code with validation and verification

    NASA Technical Reports Server (NTRS)

    Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam

    1991-01-01

    The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system and detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte-Carlo techniques based upon a constraint based description of the required performance for the system.

  3. Generation of GHS Scores from TEST and online sources ...

    EPA Pesticide Factsheets

    Alternatives assessment frameworks such as DfE (Design for the Environment) evaluate chemical alternatives in terms of human health effects, ecotoxicity, and fate. T.E.S.T. (Toxicity Estimation Software Tool) can be utilized to evaluate human health in terms of acute oral rat toxicity, developmental toxicity, endocrine activity, and mutagenicity. It can be used to evaluate ecotoxicity (in terms of acute fathead minnow toxicity) and fate (in terms of bioconcentration factor). It can also be used to estimate a variety of key physicochemical properties such as melting point, boiling point, vapor pressure, water solubility, and bioconcentration factor. A web-based version of T.E.S.T. is currently being developed to allow predictions to be made from other web tools. Online data sources such as NCCT’s Chemistry Dashboard, REACH dossiers, or ChemHat.org can also be utilized to obtain GHS (Global Harmonization System) scores for comparing alternatives. The purpose of this talk is to show how GHS (Global Harmonization System) score data can be obtained from literature sources and from T.E.S.T. (Toxicity Estimation Software Tool). These data will be used to compare chemical alternatives in the alternatives assessment dashboard (a 2018 CSS product).

  4. Multi-Megawatt-Scale Power-Hardware-in-the-Loop Interface for Testing Ancillary Grid Services by Converter-Coupled Generation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koralewicz, Przemyslaw J; Gevorgian, Vahan; Wallen, Robert B

    Power-hardware-in-the-loop (PHIL) is a simulation tool that can support electrical systems engineers in the development and experimental validation of novel, advanced control schemes that ensure the robustness and resiliency of electrical grids that have high penetrations of low-inertia variable renewable resources. With PHIL, the impact of the device under test on a generation or distribution system can be analyzed using a real-time simulator (RTS). PHIL allows for the interconnection of the RTS with a 7 megavolt ampere (MVA) power amplifier to test multi-megawatt renewable assets available at the National Wind Technology Center (NWTC). This paper addresses issues related to the development of a PHIL interface that allows testing hardware devices at actual scale. In particular, the novel PHIL interface algorithm and high-speed digital interface, which minimize the critical loop delay, are discussed.

  5. Multi-Megawatt-Scale Power-Hardware-in-the-Loop Interface for Testing Ancillary Grid Services by Converter-Coupled Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koralewicz, Przemyslaw J; Gevorgian, Vahan; Wallen, Robert B

    Power-hardware-in-the-loop (PHIL) is a simulation tool that can support electrical systems engineers in the development and experimental validation of novel, advanced control schemes that ensure the robustness and resiliency of electrical grids that have high penetrations of low-inertia variable renewable resources. With PHIL, the impact of the device under test on a generation or distribution system can be analyzed using a real-time simulator (RTS). PHIL allows for the interconnection of the RTS with a 7 megavolt ampere (MVA) power amplifier to test multi-megawatt renewable assets available at the National Wind Technology Center (NWTC). This paper addresses issues related to the development of a PHIL interface that allows testing hardware devices at actual scale. In particular, the novel PHIL interface algorithm and high-speed digital interface, which minimize the critical loop delay, are discussed.

  6. Tuberculosis vaccines: barriers and prospects on the quest for a transformative tool.

    PubMed

    Karp, Christopher L; Wilson, Christopher B; Stuart, Lynda M

    2015-03-01

    The road to a more efficacious vaccine that could be a truly transformative tool for decreasing tuberculosis morbidity and mortality, along with Mycobacterium tuberculosis transmission, is quite daunting. Despite this, there are reasons for optimism. Abetted by better conceptual clarity, clear acknowledgment of the degree of our current immunobiological ignorance, the availability of powerful new tools for dissecting the immunopathogenesis of human tuberculosis, the generation of more creative diversity in tuberculosis vaccine concepts, the development of better fit-for-purpose animal models, and the potential of more pragmatic approaches to the clinical testing of vaccine candidates, the field has promise for delivering novel tools for dealing with this worldwide scourge of poverty. © 2015 The Authors. Immunological Reviews Published by John Wiley & Sons Ltd.

  7. "Clustering" Documents Automatically to Support Scoping Reviews of Research: A Case Study

    ERIC Educational Resources Information Center

    Stansfield, Claire; Thomas, James; Kavanagh, Josephine

    2013-01-01

    Background: Scoping reviews of research help determine the feasibility and the resource requirements of conducting a systematic review, and the potential to generate a description of the literature quickly is attractive. Aims: To test the utility and applicability of an automated clustering tool to describe and group research studies to improve…

  8. High Throughput PBTK: Evaluating EPA’s Open-Source Data and Tools for Dosimetry and Exposure Reconstruction

    EPA Science Inventory

    Thousands of chemicals have been profiled by high-throughput screening (HTS) programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics (TK). While HTS generates in vitro bioactivity d...

  9. Transcriptome amplification coupled with nanopore sequencing as a surveillance tool for plant pathogens in plant and insect tissues

    USDA-ARS?s Scientific Manuscript database

    There are many plant pathogen-specific diagnostic assays, based on PCR and immune-detection. However, the ability to test for large numbers of pathogens simultaneously is lacking. Next generation sequencing (NGS) allows one to detect all organisms within a given sample, but has computational limitat...

  10. Launch Control System Software Development System Automation Testing

    NASA Technical Reports Server (NTRS)

    Hwang, Andrew

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This system requires high-quality testing that will measure and test the capabilities of the system. For the past two years, the Exploration and Operations Division at Kennedy Space Center (KSC) has assigned a group including interns and full-time engineers to develop automated tests to save the project time and money. The team worked on automating the testing process for the SCCS GUI, which would use streamed simulated data from the testing servers to produce data, plots, statuses, etc., to the GUI. The software used to develop automated tests included an automated testing framework and an automation library. The automated testing framework has a tabular-style syntax, which means the functionality of a line of code must have the appropriate number of tabs for the line to function as intended. The header section contains either paths to custom resources or the names of libraries being used. The automation library contains functionality to automate anything that appears on a desired screen with the use of image recognition software to detect and control GUI components. The data section contains any data values strictly created for the current testing file. The body section holds the tests that are being run. The function section can include any number of functions that may be used by the current testing file or any other file that resources it. The resources and body sections are required for all test files; the data and function sections can be left empty if the data values and functions being used are from a resourced library or another file. To help equip the automation team with better tools, the Project Lead of the Automated Testing Team, Jason Kapusta, assigned the task of installing and training an optical character recognition (OCR) tool to Brandon Echols, a fellow intern, and me. The purpose of the OCR tool is to analyze an image and find the coordinates of any group of text. Some issues that arose while installing the OCR tool included the absence of certain libraries needed to train the tool and an outdated software version. We eventually resolved the issues and successfully installed the OCR tool. Training the tool required many images and different fonts and sizes, but in the end the tool learned to accurately decipher the text in the images and their coordinates. The OCR tool produced a file that contained significant metadata for each section of text, but only the text and its coordinates were required for our purpose. The team made a script to parse the information we wanted from the OCR file to a different file that would be used by automation functions within the automated framework. Since a majority of development and testing for the automated test cases for the GUI in question has been done using live simulated data on the workstations at the Launch Control Center (LCC), a large amount of progress has been made. As of this writing, about 60% of all automated testing has been implemented. Additionally, the OCR tool will help make our automated tests more robust due to the tool's text recognition being highly scalable to different text fonts and text sizes. Soon we will have the whole test system automated, allowing more full-time engineers to work on development projects.
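
    A hedged sketch of the OCR post-processing step described above: run an OCR engine over a screenshot and keep only each word's text and location, discarding the remaining metadata. pytesseract stands in for the unnamed OCR tool actually used, and the file name is hypothetical.

    ```python
    # Extract (text, x, y) tuples from a GUI screenshot via OCR.
    import pytesseract
    from PIL import Image

    def text_with_coordinates(image_path):
        data = pytesseract.image_to_data(Image.open(image_path),
                                         output_type=pytesseract.Output.DICT)
        results = []
        for text, left, top, w, h in zip(data['text'], data['left'], data['top'],
                                         data['width'], data['height']):
            if text.strip():                                     # skip empty detections
                results.append((text, left + w // 2, top + h // 2))  # word center point
        return results

    # e.g. text_with_coordinates('gui_screenshot.png') -> [('Launch', 412, 88), ...]
    ```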

  11. Incompleteness of Bluetooth protocol conformance test cases

    NASA Astrophysics Data System (ADS)

    Wu, Peng; Gao, Qiang

    2001-10-01

    This paper describes a formal method to verify the completeness of conformance testing, in which not only the Implementation Under Test (IUT) is formalized in SDL, but the conformance tester is also described in SDL, so that conformance testing can be performed in a simulator provided with a CASE tool. The protocol set considered is Bluetooth, an open wireless communication technology. Our research results show that the Bluetooth conformance test specification is not complete, in that it has only limited coverage and many important capabilities defined in the Bluetooth core specification are not tested. We also give a detailed report on the missing test cases against the Bluetooth core specification, and provide a guide for further test case generation.

  12. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  13. Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cipiti, Benjamin B.; Shoman, Nathan

    The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program and specific model changes to enable more streamlined integration in the future. This report describes the model changes and plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.

  14. Supervised learning of tools for content-based search of image databases

    NASA Astrophysics Data System (ADS)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
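
    The supervised-learning loop described above follows a simple control flow: apply the current search tool, let the user point at mistakes, fold those mistakes into the model, and rebuild the tool. The Python sketch below is a hedged illustration of that loop only; the names and data structures are hypothetical, and TIM's actual functional-template construction is not reproduced here.

      from dataclasses import dataclass, field

      @dataclass
      class Mistake:
          pixel: tuple         # (row, col) the user pointed at
          missed_target: bool  # True: missed detection, False: false alarm

      @dataclass
      class IntentionModel:
          examples: list = field(default_factory=list)
          def update(self, mistakes):
              self.examples.extend(mistakes)

      def build_search_tool(model):
          # Placeholder for constructing a functional template (a
          # generalized matched filter) from the accumulated examples.
          def search_tool(image):
              return []  # would return candidate match locations
          return search_tool

      def interactive_training(test_image, get_user_mistakes, max_rounds=10):
          model = IntentionModel()
          tool = build_search_tool(model)
          for _ in range(max_rounds):
              matches = tool(test_image)             # apply tool, show feedback
              mistakes = get_user_mistakes(matches)  # user points at errors
              if not mistakes:                       # user accepts the result
                  break
              model.update(mistakes)                 # learn from the mistakes
              tool = build_search_tool(model)        # rebuild the search tool
          return tool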

  15. Critical thinking evaluation in reflective writing: Development and testing of Carter Assessment of Critical Thinking in Midwifery (Reflection).

    PubMed

    Carter, Amanda G; Creedy, Debra K; Sidebotham, Mary

    2017-11-01

    To develop and test a tool designed for use by academics to evaluate pre-registration midwifery students' critical thinking skills in reflective writing. A descriptive cohort design was used, with a random sample (n = 100) of archived student reflective writings based on a clinical event or experience during 2014 and 2015. A staged model for tool development was used to develop a fifteen-item scale involving item generation; mapping of draft items to critical thinking concepts and expert review to test content validity; inter-rater reliability testing; pilot testing of the tool on 100 reflective writings; and psychometric testing. Item scores were analysed for mean, range and standard deviation. Internal reliability, content and construct validity were assessed. Expert review of the tool revealed a high content validity index score of 0.98. Using two independent raters to establish inter-rater reliability, good absolute agreement of 72% was achieved, with a Kappa coefficient K = 0.43 (p<0.0001). Construct validity via exploratory factor analysis revealed three factors: analyses context, reasoned inquiry, and self-evaluation. The mean total score for the tool was 50.48 (SD = 12.86). Total and subscale scores correlated significantly. The scale achieved good internal reliability with a Cronbach's alpha coefficient of .93. This study established the reliability and validity of the CACTiM (reflection) for use by academics to evaluate midwifery students' critical thinking in reflective writing. Validation with large diverse samples is warranted. Reflective practice is a key learning and teaching strategy in undergraduate Bachelor of Midwifery programmes and essential for safe, competent practice. There is the potential to enhance critical thinking development by assessing reflective writing with the CACTiM (reflection) tool to provide formative and summative feedback to students and inform teaching strategies. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
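
    For readers unfamiliar with the statistics reported above, the following hedged Python sketch shows how percent agreement, Cohen's kappa and Cronbach's alpha are typically computed; the ratings are invented toy data, not the study's.

      import numpy as np

      def cohens_kappa(r1, r2):
          r1, r2 = np.asarray(r1), np.asarray(r2)
          po = np.mean(r1 == r2)  # observed (absolute) agreement
          cats = np.union1d(r1, r2)
          pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
          return (po - pe) / (1 - pe)  # chance-corrected agreement

      def cronbach_alpha(items):
          # items: 2-D array, rows = students, columns = scale items
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      rater1 = [3, 2, 4, 4, 1, 2]  # toy ratings from two raters
      rater2 = [3, 2, 3, 4, 1, 2]
      print(np.mean(np.array(rater1) == np.array(rater2)))  # agreement
      print(cohens_kappa(rater1, rater2))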

  16. Design and fabrication of composite blades for the Mod-1 wind turbine generator

    NASA Technical Reports Server (NTRS)

    Batesole, W. R.; Gunsallus, C. T.

    1981-01-01

    The design, tooling, fabrication, quality control, and testing phases carried out to date, as well as the testing still planned, are described. Differences from the 150 foot blade, introduced for cost and manufacturing improvement purposes, are discussed, as is the lightning protection system installed in the blades. Actual costs and manhours expended for Blade No. 2 are provided as a baseline, along with a projection of costs for the blade in production.

  17. PathMAPA: a tool for displaying gene expression and performing statistical tests on metabolic pathways at multiple levels for Arabidopsis.

    PubMed

    Pan, Deyun; Sun, Ning; Cheung, Kei-Hoi; Guan, Zhong; Ma, Ligeng; Holford, Matthew; Deng, Xingwang; Zhao, Hongyu

    2003-11-07

    To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expressions for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at pathway level. The first feature allows for the periodical updating of genomic data for pathways, while the second feature can provide insight into how treatments affect relevant pathways for the selected experiment(s).

  18. PathMAPA: a tool for displaying gene expression and performing statistical tests on metabolic pathways at multiple levels for Arabidopsis

    PubMed Central

    Pan, Deyun; Sun, Ning; Cheung, Kei-Hoi; Guan, Zhong; Ma, Ligeng; Holford, Matthew; Deng, Xingwang; Zhao, Hongyu

    2003-01-01

    Background To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. Results We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expressions for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. Conclusion PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at pathway level. The first feature allows for the periodical updating of genomic data for pathways, while the second feature can provide insight into how treatments affect relevant pathways for the selected experiment(s). PMID:14604444

  19. Systems Prototyping with Fourth Generation Tools.

    ERIC Educational Resources Information Center

    Sholtys, Phyllis

    1983-01-01

    The development of information systems using an engineering approach that uses both traditional programming techniques and fourth generation software tools is described. Fourth generation applications tools are used to quickly develop a prototype system that is revised as the user clarifies requirements. (MLW)

  20. The Rack-Gear Tool Generation Modelling. Non-Analytical Method Developed in CATIA, Using the Relative Generating Trajectories Method

    NASA Astrophysics Data System (ADS)

    Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.

    2016-11-01

    The modelling of a curl of surfaces associated with a pair of rolling centrodes, when the profile of the rack-gear's teeth is known by direct measurement as a coordinate matrix, has as its goal determining the generating quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, it is possible to determine the generating geometrical error as a component of the total error. The generation modelling allows highlighting the potential errors of the generating tool, in order to correct its profile prior to using the tool in the machining process. A method developed in CATIA is proposed, based on a new approach, namely the method of "relative generating trajectories". The analytical foundations are presented, together with applications for known models of rack-gear type tools used on Maag teething machines.

  1. Microfluidic dissolved oxygen gradient generator biochip as a useful tool in bacterial biofilm studies.

    PubMed

    Skolimowski, Maciej; Nielsen, Martin Weiss; Emnéus, Jenny; Molin, Søren; Taboryski, Rafael; Sternberg, Claus; Dufva, Martin; Geschke, Oliver

    2010-08-21

    A microfluidic chip for the generation of dissolved oxygen gradients was designed, fabricated and tested. A novel approach of active oxygen depletion through a gas permeable membrane was applied. Numerical simulations of the generated O2 gradients were correlated with measured oxygen concentrations. The developed microsystem was used to study growth patterns of the bacterium Pseudomonas aeruginosa in medium with different oxygen concentrations. The results showed that attachment of Pseudomonas aeruginosa to the substrate changed with oxygen concentration. This demonstrates that the device can be used for studies requiring controlled oxygen levels and for future studies of microaerobic and anaerobic conditions.

  2. How to perform a critically appraised topic: part 2, appraise, evaluate, generate, and recommend.

    PubMed

    Kelly, Aine Marie; Cronin, Paul

    2011-11-01

    This article continues the discussion of a critically appraised topic started in Part 1. A critically appraised topic is a practical tool for learning and applying critical appraisal skills. This article outlines steps 4-7 involved in performing a critically appraised topic for studies of diagnostic tests: Appraise, Appraise the literature; Evaluate, evaluate the strength of the evidence from the literature; Generate, generate graphs of conditional probability; and Recommend, draw conclusions and make recommendations. For steps 4-7 of performing a critically appraised topic, the main study results are summarized and translated into clinically useful measures of accuracy, efficacy, or risk.
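
    The "Generate" step above rests on Bayes' theorem in odds form: post-test odds equal pre-test odds multiplied by the likelihood ratio of the test result. A minimal Python sketch, with illustrative (not article-derived) sensitivity and specificity values, shows how the conditional probability curves are produced:

      import numpy as np

      def post_test_probability(pretest_p, sens, spec, positive=True):
          # Likelihood ratio of a positive or negative test result
          lr = sens / (1 - spec) if positive else (1 - sens) / spec
          odds = pretest_p / (1 - pretest_p) * lr  # Bayes in odds form
          return odds / (1 + odds)

      pretest = np.linspace(0.01, 0.99, 99)
      post_pos = post_test_probability(pretest, 0.90, 0.85, positive=True)
      post_neg = post_test_probability(pretest, 0.90, 0.85, positive=False)
      # Plotting post_pos and post_neg against pretest yields the graphs
      # of conditional probability described in the Generate step.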

  3. Electromagnetic Simulations for Aerospace Application Final Report CRADA No. TC-0376-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madsen, N.; Meredith, S.

    Electromagnetic (EM) simulation tools play an important role in the design cycle, allowing optimization of a design before it is fabricated for testing. The purpose of this cooperative project was to provide Lockheed with state-of-the-art EM simulation software that will enable the optimal design of the next generation of low-observable (LO) military aircraft through the VHF regime. More particularly, the project was principally code development and validation, its goal to produce a 3-D, conforming-grid, time-domain (TD) EM simulation tool, consisting of a mesh generator, a DSI3D-based simulation kernel, and an RCS postprocessor, which was useful in the optimization of LO aircraft, both for full-aircraft simulations run on a massively parallel computer and for small-scale problems run on a UNIX workstation.

  4. Next Generation Programmable Bio-Nano-Chip System for On-Site Detection in Oral Fluids.

    PubMed

    Christodoulides, Nicolaos; De La Garza, Richard; Simmons, Glennon W; McRae, Michael P; Wong, Jorge; Newton, Thomas F; Kosten, Thomas R; Haque, Ahmed; McDevitt, John T

    2015-11-23

    Current on-site drug of abuse detection methods involve invasive sampling of blood and urine specimens, or collection of oral fluid, followed by qualitative screening tests using immunochromatographic cartridges. Test confirmation and quantitative assessment of a presumptive positive are then provided by remote laboratories, an inefficient and costly process decoupled from the initial sampling. Recently, a new noninvasive oral fluid sampling approach integrated with the chip-based Programmable Bio-Nano-Chip (p-BNC) platform has been developed for the rapid (~10 minutes), sensitive (~ng/ml) detection and quantitation of 12 drugs of abuse. Furthermore, the system can provide the time course of select drug and metabolite profiles in oral fluids. For cocaine, we observed three slope components that correlated with cocaine-induced impairment using this chip-based p-BNC detection modality. Thus, the p-BNC has significant potential for roadside drug testing by law enforcement officers. Initial work on chip-based drug detection was completed using 'macro' or "chip in the lab" prototypes that included metal-encased "flow cells", external peristaltic pumps and bench-top analyzer instrumentation. We now describe the next generation miniaturized analyzer instrumentation along with customized disposables and sampling devices. These tools will offer real-time oral fluid drug monitoring capabilities, to be used for roadside drug testing as well as in clinical settings as a noninvasive, quantitative, accurate and sensitive tool to verify patient adherence to treatment.

  5. Generation of Leishmania Hybrids by Whole Genomic DNA Transformation

    PubMed Central

    Coelho, Adriano C.; Leprohon, Philippe; Ouellette, Marc

    2012-01-01

    Genetic exchange is a powerful tool to study gene function in microorganisms. Here, we tested the feasibility of generating Leishmania hybrids by electroporating genomic DNA of donor cells into recipient Leishmania parasites. The donor DNA was marked with a drug resistance marker, facilitating the selection of DNA transfer into the recipient cells. The transferred DNA was integrated exclusively at homologous loci and was as large as 45 kb. The independent generation of L. infantum hybrids with L. major sequences was possible for several chromosomal regions. Interfering with the mismatch repair machinery by inactivating the MSH2 gene increased the efficiency of recombination between divergent sequences, hence favouring the selection of hybrids between species. Hybrids were shown to acquire the phenotype derived from the donor cells, as demonstrated by the transfer of drug resistance genes from L. major into L. infantum. The described method is a first step towards generating in vitro hybrids for testing gene functions in a natural genomic context in the parasite Leishmania. PMID:23029579

  6. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  7. Bacteria-powered battery on paper.

    PubMed

    Fraiwan, Arwa; Choi, Seokheun

    2014-12-21

    Paper-based devices have recently emerged as simple and low-cost paradigms for fluid manipulation and analytical/clinical testing. However, there are significant challenges in developing paper-based devices at the system level, which contain integrated paper-based power sources. Here, we report a microfabricated paper-based bacteria-powered battery that is capable of generating power from microbial metabolism. The battery on paper showed a very short start-up time relative to conventional microbial fuel cells (MFCs); paper substrates eliminated the time traditional MFCs required to accumulate and acclimate bacteria on the anode. Only four batteries connected in series provided desired values of current and potential to power an LED for more than 30 minutes. The battery featured (i) a low-cost paper-based proton exchange membrane directly patterned on commercially available parchment paper and (ii) paper reservoirs for holding the anolyte and the catholyte for an extended period of time. Based on this concept, we also demonstrate the use of paper-based test platforms for the rapid characterization of electricity-generating bacteria. This paper-based microbial screening tool does not require external pumps/tubings and represents the most rapid test platform (<50 min) compared with the time needed by using traditional screening tools (up to 103 days) and even recently proposed MEMS arrays (< 2 days).

  8. Proposed Facility Modifications to Support Propulsion Systems Testing Under Simulated Space Conditions at Plum Brook Station's Spacecraft Propulsion Research Facility (B-2)

    NASA Technical Reports Server (NTRS)

    Edwards, Daryl A.

    2008-01-01

    Preparing NASA's Plum Brook Station's Spacecraft Propulsion Research Facility (B-2) to support NASA's new generation of launch vehicles has raised many challenges for B-2's support staff. The facility provides a unique capability to test chemical propulsion systems/vehicles while simulating space thermal and vacuum environments. Designed and constructed in the early 1960s to support upper stage cryogenic engine/vehicle system development, the Plum Brook Station B-2 facility will require modifications to support the larger, more powerful, and more advanced engine systems for the next generation of vehicles leaving earth's orbit. Engine design improvements over the years have included large area expansion ratio nozzles, greater combustion chamber pressures, and advanced materials. Consequently, it has become necessary to determine what facility changes are required and how the facility can be adapted to support varying customers and their specific test needs. Exhaust system performance, including understanding the present facility capabilities, is the primary focus of this work. A variety of approaches and analytical tools are being employed to gain this understanding. This presentation discusses some of the challenges in applying these tools to this project and expected facility configuration to support the varying customer needs.

  9. Proposed Facility Modifications to Support Propulsion Systems Testing Under Simulated Space Conditions at Plum Brook Station's Spacecraft Propulsion Research Facility (B-2)

    NASA Technical Reports Server (NTRS)

    Edwards, Daryl A.

    2007-01-01

    Preparing NASA's Plum Brook Station's Spacecraft Propulsion Research Facility (B-2) to support NASA's new generation of launch vehicles has raised many challenges for B-2's support staff. The facility provides a unique capability to test chemical propulsion systems/vehicles while simulating space thermal and vacuum environments. Designed and constructed 4 decades ago to support upper stage cryogenic engine/vehicle system development, the Plum Brook Station B-2 facility will require modifications to support the larger, more powerful, and more advanced engine systems for the next generation of vehicles leaving earth's orbit. Engine design improvements over the years have included large area expansion ratio nozzles, greater combustion chamber pressures, and advanced materials. Consequently, it has become necessary to determine what facility changes are required and how the facility can be adapted to support varying customers and their specific test needs. Instrumental in this task is understanding the present facility capabilities and identifying what reasonable changes can be implemented. A variety of approaches and analytical tools are being employed to gain this understanding. This paper discusses some of the challenges in applying these tools to this project and expected facility configuration to support the varying customer needs.

  10. Controllable Grid Interface for Testing Ancillary Service Controls and Fault Performance of Utility-Scale Wind Power Generation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gevorgian, Vahan; Koralewicz, Przemyslaw; Wallen, Robb

    The rapid expansion of wind power has led many transmission system operators to demand modern wind power plants to comply with strict interconnection requirements. Such requirements involve various aspects of wind power plant operation, including fault ride-through and power quality performance as well as the provision of ancillary services to enhance grid reliability. During recent years, the National Renewable Energy Laboratory (NREL) of the U.S. Department of Energy has developed a new, groundbreaking testing apparatus and methodology to test and demonstrate many existing and future advanced controls for wind generation (and other renewable generation technologies) on the multimegawatt scale and medium-voltage levels. This paper describes the capabilities and control features of NREL's 7-MVA power electronic grid simulator (also called a controllable grid interface, or CGI) that enables testing many active and reactive power control features of modern wind turbine generators -- including inertial response, primary and secondary frequency responses, and voltage regulation -- under a controlled, medium-voltage grid environment. In particular, this paper focuses on the specifics of testing the balanced and unbalanced fault ride-through characteristics of wind turbine generators under simulated strong and weak medium-voltage grid conditions. In addition, this paper provides insights on the power hardware-in-the-loop feature implemented in the CGI to emulate (in real time) the conditions that might exist in various types of electric power systems under normal operations and/or contingency scenarios. Using actual test examples and simulation results, this paper describes the value of CGI as an ultimate modeling validation tool for all types of 'grid-friendly' controls by wind generation.

  11. Design of Center-TRACON Automation System

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Davis, Thomas J.; Green, Steven

    1993-01-01

    A system for the automated management and control of terminal area traffic, referred to as the Center-TRACON Automation System (CTAS), is being developed at NASA Ames Research Center. In a cooperative program, NASA and FAA have efforts underway to install and evaluate the system at the Denver area and Dallas/Ft. Worth area air traffic control facilities. This paper will review CTAS architecture, and automation functions as well as the integration of CTAS into the existing operational system. CTAS consists of three types of integrated tools that provide computer-generated advisories for both en-route and terminal area controllers to guide them in managing and controlling arrival traffic efficiently. One tool, the Traffic Management Advisor (TMA), generates runway assignments, landing sequences and landing times for all arriving aircraft, including those originating from nearby feeder airports. TMA also assists in runway configuration control and flow management. Another tool, the Descent Advisor (DA), generates clearances for the en-route controllers handling arrival flows to metering gates. The DA's clearances ensure fuel-efficient and conflict free descents to the metering gates at specified crossing times. In the terminal area, the Final Approach Spacing Tool (FAST) provides heading and speed advisories that help controllers produce an accurately spaced flow of aircraft on the final approach course. Data bases consisting of several hundred aircraft performance models, airline preferred operational procedures, and a three dimensional wind model support the operation of CTAS. The first component of CTAS, the Traffic Management Advisor, is being evaluated at the Denver TRACON and the Denver Air Route Traffic Control Center. The second component, the Final Approach Spacing Tool, will be evaluated in several stages at the Dallas/Fort Worth Airport beginning in October 1993. An initial stage of the Descent Advisor tool is being prepared for testing at the Denver Center in late 1994. Operational evaluations of all three integrated CTAS tools are expected to begin at the two field sites in 1995.

  12. Smart Aquarium as Physics Learning Media for Renewable Energy

    NASA Astrophysics Data System (ADS)

    Desnita, D.; Raihanati, R.; Susanti, D.

    2018-04-01

    A smart aquarium has been developed as a learning medium to visualize a Micro Hydro Power Generator (MHPG). It uses the aquarium's water circulation system and Wind Power Generation (WPG), driven through a wheel, as its sources. It is also used to teach energy changes, circular motion and wheel connections, electromagnetic effects, and AC power circuits. The output power and system efficiency were adjusted by varying the water level and wind speed. The specific targets of this research were to: (i) develop green aquarium technology suitable for use as a physics learning medium, and (ii) improve the quality of the learning process and outcomes for senior high school students. The research method followed the development research model of Borg and Gall, which includes preliminary studies, design, product development, expert validation, product feasibility testing, and finalization. The validation test by the experts states that the props are feasible to use. Limited trials showed that this tool can improve students' science process skills.

  13. Impact of an Information Technology-Enabled Initiative on the Quality of Prostate Multiparametric MRI Reports

    PubMed Central

    Silveira, Patricia C.; Dunne, Ruth; Sainani, Nisha I.; Lacson, Ronilda; Silverman, Stuart G.; Tempany, Clare M.; Khorasani, Ramin

    2015-01-01

    Rationale and Objectives Assess the impact of implementing a structured report template and a computer-aided diagnosis (CAD) tool on the quality of prostate multiparametric MRI (mp-MRI) reports. Materials and Methods Institutional Review Board approval was obtained for this HIPAA-compliant study performed at an academic medical center. The study cohort included all prostate mp-MRI reports (n=385) finalized 6 months before and after implementation of a structured report template and a CAD tool (collectively the IT tools) integrated into the PACS workstation. Primary outcome measure was quality of prostate mp-MRI reports. An expert panel of our institution's subspecialty trained abdominal radiologists defined prostate mp-MRI report quality as optimal, satisfactory or unsatisfactory based on documentation of 9 variables. Reports were reviewed to extract the predefined quality variables and determine whether the IT tools were used to create each report. Chi-square and Student's t-tests were used to compare report quality before and after implementation of IT tools. Results The overall proportion of optimal or satisfactory reports increased from 29.8% (47/158) to 53.3% (121/227) (p<0.001) after implementing the IT tools. While the proportion of optimal or satisfactory reports increased among reports generated using at least one of the IT tools (47/158 [29.8%] vs. 105/161 [65.2%]; p<0.001), there was no change in quality among reports generated without use of the IT tools (47/158 [29.8%] vs. 16/66 [24.2%]; p=0.404). Conclusion The use of a structured template and CAD tool improved the quality of prostate mp-MRI reports compared to free-text report format and subjective measurement of contrast enhancement kinetic curve. PMID:25863794

  14. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, and 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  15. miRCat2: accurate prediction of plant and animal microRNAs from next-generation sequencing datasets

    PubMed Central

    Paicu, Claudia; Mohorianu, Irina; Stocks, Matthew; Xu, Ping; Coince, Aurore; Billmeier, Martina; Dalmay, Tamas; Moulton, Vincent; Moxon, Simon

    2017-01-01

    Abstract Motivation MicroRNAs are a class of ∼21–22 nt small RNAs which are excised from a stable hairpin-like secondary structure. They have important gene regulatory functions and are involved in many pathways including developmental timing, organogenesis and development in eukaryotes. There are several computational tools for miRNA detection from next-generation sequencing datasets. However, many of these tools suffer from high false positive and false negative rates. Here we present a novel miRNA prediction algorithm, miRCat2. miRCat2 incorporates a new entropy-based approach to detect miRNA loci, which is designed to cope with the high sequencing depth of current next-generation sequencing datasets. It has a user-friendly interface and produces graphical representations of the hairpin structure and plots depicting the alignment of sequences on the secondary structure. Results We test miRCat2 on a number of animal and plant datasets and present a comparative analysis with miRCat, miRDeep2, miRPlant and miReap. We also use mutants in the miRNA biogenesis pathway to evaluate the predictions of these tools. Results indicate that miRCat2 has an improved accuracy compared with other methods tested. Moreover, miRCat2 predicts several new miRNAs that are differentially expressed in wild-type versus mutants in the miRNA biogenesis pathway. Availability and Implementation miRCat2 is part of the UEA small RNA Workbench and is freely available from http://srna-workbench.cmp.uea.ac.uk/. Contact v.moulton@uea.ac.uk or s.moxon@uea.ac.uk Supplementary information Supplementary data are available at Bioinformatics online. PMID:28407097

  16. Measuring Up: Online Technology Assessment Tools Ease the Teacher's Burden and Help Students Learn

    ERIC Educational Resources Information Center

    Roland, Jennifer

    2006-01-01

    Standards are a reality in all academic disciplines, and they can be hard to measure using conventional methods. Technology skills in particular are hard to assess using multiple-choice, paper-based tests. A new generation of online assessments of student technology skills allows students to prove proficiency by completing tasks in their natural…

  17. Design, analysis and testing of a new piezoelectric tool actuator for elliptical vibration turning

    NASA Astrophysics Data System (ADS)

    Lin, Jieqiong; Han, Jinguo; Lu, Mingming; Yu, Baojun; Gu, Yan

    2017-08-01

    A new piezoelectric tool actuator (PETA) for elliptical vibration turning has been developed based on a hybrid flexure hinge connection. Two double parallel four-bar linkage mechanisms and two right circular flexure hinges were chosen to guide the motion. The stiffnesses along the two input displacement directions were modeled according to the principle of virtual work, and the kinematic analysis was conducted theoretically. Finite element analysis was used to carry out static and dynamic analyses. To evaluate the performance of the developed PETA, off-line experimental tests were carried out to investigate the step responses, motion strokes, resolutions, parasitic motions, and natural frequencies of the PETA along the two input directions. The relationship between input displacement and output displacement, as well as the tool tip's elliptical trajectory at different phase shifts, was analyzed. By using the developed PETA mechanism, micro-dimple patterns were generated as a preliminary application to demonstrate the feasibility and efficiency of PETA for elliptical vibration turning.

  18. Proceedings Papers of the AFSC (Air Force Systems Command) Avionics Standardization Conference (2nd) Held at Dayton, Ohio on 30 November-2 December 1982. Volume 2

    DTIC Science & Technology

    1982-11-01

    groups. The Air Force is concerned with such issues as resource allocation to foster and promote standards, transitioning from current to future...perform automatic resource allocation, generate MATE Intermediate code, and provide formatted output listings. d. MATE Test Executive (MTE). The MTE...AFFECTED BY THESE STANDARDS TO KNOW JUST WHAT IS AVAILABLE TO SUPPORT THEM: THE HARDWARE; THE COMPLIANCE TESTING; THE TOOLS NECESSARY TO FACILITATE DESIGN

  19. OpenSHS: Open Smart Home Simulator.

    PubMed

    Alshammari, Nasser; Alshammari, Talal; Sedky, Mohamed; Champion, Justin; Bauer, Carolin

    2017-05-02

    This paper develops a new hybrid, open-source, cross-platform 3D smart home simulator, OpenSHS, for dataset generation. OpenSHS offers an opportunity for researchers in the field of the Internet of Things (IoT) and machine learning to test and evaluate their models. Following a hybrid approach, OpenSHS combines advantages from both interactive and model-based approaches. This approach reduces the time and effort required to generate simulated smart home datasets. We have designed a replication algorithm for extending and expanding a dataset. A small sample dataset produced by OpenSHS can be extended without affecting the logical order of the events. The replication provides a solution for generating large representative smart home datasets. We have built an extensible library of smart devices that facilitates the simulation of current and future smart home environments. Our tool divides the dataset generation process into three distinct phases: first, design: the researcher designs the initial virtual environment by building the home, importing smart devices and creating contexts; second, simulation: the participant simulates his/her context-specific events; and third, aggregation: the researcher applies the replication algorithm to generate the final dataset. We conducted a study to assess the ease of use of our tool on the System Usability Scale (SUS).

  20. OpenSHS: Open Smart Home Simulator

    PubMed Central

    Alshammari, Nasser; Alshammari, Talal; Sedky, Mohamed; Champion, Justin; Bauer, Carolin

    2017-01-01

    This paper develops a new hybrid, open-source, cross-platform 3D smart home simulator, OpenSHS, for dataset generation. OpenSHS offers an opportunity for researchers in the field of the Internet of Things (IoT) and machine learning to test and evaluate their models. Following a hybrid approach, OpenSHS combines advantages from both interactive and model-based approaches. This approach reduces the time and effort required to generate simulated smart home datasets. We have designed a replication algorithm for extending and expanding a dataset. A small sample dataset produced by OpenSHS can be extended without affecting the logical order of the events. The replication provides a solution for generating large representative smart home datasets. We have built an extensible library of smart devices that facilitates the simulation of current and future smart home environments. Our tool divides the dataset generation process into three distinct phases: first, design: the researcher designs the initial virtual environment by building the home, importing smart devices and creating contexts; second, simulation: the participant simulates his/her context-specific events; and third, aggregation: the researcher applies the replication algorithm to generate the final dataset. We conducted a study to assess the ease of use of our tool on the System Usability Scale (SUS). PMID:28468330
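
    The replication idea described in both records above can be illustrated with a short, hedged Python sketch: repeat a small recorded event sequence with shifted timestamps so the logical order of events is preserved. This is an illustration of the concept only, not OpenSHS's actual algorithm.

      from datetime import datetime, timedelta

      def replicate(events, copies):
          # events: list of (timestamp, device, state) tuples, sorted by time
          span = events[-1][0] - events[0][0] + timedelta(minutes=1)
          out = []
          for i in range(copies):
              offset = i * span  # shift each copy forward in time
              out.extend((t + offset, dev, state) for t, dev, state in events)
          return out

      sample = [
          (datetime(2017, 5, 2, 7, 0), "bedroom_light", "on"),
          (datetime(2017, 5, 2, 7, 5), "coffee_maker", "on"),
          (datetime(2017, 5, 2, 7, 30), "front_door", "open"),
      ]
      extended = replicate(sample, copies=30)  # 30 simulated mornings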

  1. Modeling Complex Workflow in Molecular Diagnostics

    PubMed Central

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  2. Embryonic stem cells and the next generation of developmental toxicity testing.

    PubMed

    Kugler, Josephine; Huhse, Bettina; Tralau, Tewes; Luch, Andreas

    2017-08-01

    The advent of stem cell technology has seen the establishment of embryonic stem cells (ESCs) as molecular model systems and screening tools. Although ESCs are nowadays widely used in research, regulatory implementation for developmental toxicity testing is pending. Areas Covered: This review evaluates the performance of current ESC, including human (h)ESC testing systems, trying to elucidate their potential for developmental toxicity testing. It shall discuss defining parameters and mechanisms, their relevance and contemplate what can realistically be expected. Crucially this includes the question of how to ascertain the quality of currently employed cell lines and tests based thereon. Finally, the use of hESCs will raise ethical concerns which should be addressed early on. Expert Opinion: While the suitability of (h)ESCs as tools for research and development goes undisputed, any routine use for developmental toxicity testing currently still seems premature. The reasons for this comprise inherent biological deficiencies as well as cell line quality and system validation. Overcoming these issues will require collaboration of scientists, test developers and regulators. Also, validation needs to be made worthwhile for academia. Finally we have to continuously rethink existing strategies, making room for improved testing and innovative approaches.

  3. A visual analytics approach for pattern-recognition in patient-generated data.

    PubMed

    Feller, Daniel J; Burgermaster, Marissa; Levine, Matthew E; Smaldone, Arlene; Davidson, Patricia G; Albers, David J; Mamykina, Lena

    2018-06-13

    To develop and test a visual analytics tool to help clinicians identify systematic and clinically meaningful patterns in patient-generated data (PGD) while decreasing perceived information overload. Participatory design was used to develop Glucolyzer, an interactive tool featuring hierarchical clustering and a heatmap visualization to help registered dietitians (RDs) identify associative patterns between blood glucose levels and per-meal macronutrient composition for individuals with type 2 diabetes (T2DM). Ten RDs participated in a within-subjects experiment to compare Glucolyzer to a static logbook format. For each representation, participants had 25 minutes to examine 1 month of diabetes self-monitoring data captured by an individual with T2DM and identify clinically meaningful patterns. We compared the quality and accuracy of the observations generated using each representation. Participants generated 50% more observations when using Glucolyzer (98) than when using the logbook format (64) without any loss in accuracy (69% accuracy vs 62%, respectively, p = .17). Participants identified more observations that included ingredients other than carbohydrates using Glucolyzer (36% vs 16%, p = .027). Fewer RDs reported feelings of information overload using Glucolyzer compared to the logbook format. Study participants displayed variable acceptance of hierarchical clustering. Visual analytics have the potential to mitigate provider concerns about the volume of self-monitoring data. Glucolyzer helped dietitians identify meaningful patterns in self-monitoring data without incurring perceived information overload. Future studies should assess whether similar tools can support clinicians in personalizing behavioral interventions that improve patient outcomes.
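
    The core analytic move described above, clustering meals so that similar macronutrient compositions sit next to each other in a heatmap, can be sketched in a few lines of Python. The data below are random placeholders, not patient-generated data, and the sketch is not Glucolyzer itself.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, leaves_list
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      macros = rng.random((40, 3))           # 40 meals x (carbs, protein, fat)
      glucose_rise = rng.normal(30, 15, 40)  # post-meal glucose rise (mg/dL)

      order = leaves_list(linkage(macros, method="ward"))  # cluster the meals
      matrix = np.column_stack([macros, glucose_rise / np.abs(glucose_rise).max()])

      plt.imshow(matrix[order], aspect="auto", cmap="viridis")
      plt.xticks(range(4), ["carbs", "protein", "fat", "glucose rise"])
      plt.ylabel("meals (clustered)")
      plt.colorbar(label="normalized value")
      plt.show()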

  4. Fault tolerant testbed evaluation, phase 1

    NASA Technical Reports Server (NTRS)

    Caluori, V., Jr.; Newberry, T.

    1993-01-01

    In recent years, avionics systems development costs have become the driving factor in the development of space systems, military aircraft, and commercial aircraft. A method of reducing avionics development costs is to utilize state-of-the-art software application generator (autocode) tools and methods. The recent maturity of application generator technology has the potential to dramatically reduce development costs by eliminating software development steps that have historically introduced errors and the need for re-work. Application generator tools have been demonstrated to be an effective method for autocoding non-redundant, relatively low-rate input/output (I/O) applications on the Space Station Freedom (SSF) program; however, they have not been demonstrated for fault tolerant, high-rate I/O, flight critical environments. This contract will evaluate the use of application generators in these harsh environments. Using Boeing's quad-redundant avionics system controller as the target system, Space Shuttle Guidance, Navigation, and Control (GN&C) software will be autocoded, tested, and evaluated in the Johnson Space Center Avionics Engineering Laboratory (JAEL). The response of the autocoded system will be shown to match the response of the existing Shuttle General Purpose Computers (GPC's), thereby demonstrating the viability of using autocode techniques in the development of future avionics systems.

  5. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems, software up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods, non-testing approaches such as predictive computer models up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised on international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe predictive performance of validated test methods as well as their reliability.

  6. Summary of CPAS EDU Testing Analysis Results

    NASA Technical Reports Server (NTRS)

    Romero, Leah M.; Bledsoe, Kristin J.; Davidson, John.; Engert, Meagan E.; Fraire, Usbaldo, Jr.; Galaviz, Fernando S.; Galvin, Patrick J.; Ray, Eric S.; Varela, Jose

    2015-01-01

    The Orion program's Capsule Parachute Assembly System (CPAS) project is currently conducting its third generation of testing, the Engineering Development Unit (EDU) series. This series utilizes two test articles, a dart-shaped Parachute Compartment Drop Test Vehicle (PCDTV) and a capsule-shaped Parachute Test Vehicle (PTV), both of which include a full size, flight-like parachute system and require a pallet delivery system for aircraft extraction. To date, 15 tests have been completed, including six with PCDTVs and nine with PTVs. Two of the PTV tests included the Forward Bay Cover (FBC) provided by Lockheed Martin. Advancements in modeling techniques applicable to parachute fly-out, vehicle rate of descent, torque, and load train also occurred during the EDU testing series. An upgrade from a composite to an independent parachute simulation allowed parachute modeling at a higher level of fidelity than during previous generations. The complexity of separating the test vehicles from their pallet delivery systems necessitated the use of the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulator for modeling mated-vehicle aircraft extraction and separation. This paper gives an overview of each EDU test and summarizes the development of CPAS analysis tools and techniques during EDU testing.

  7. Generation After Next Propulsor Research: Robust Design for Embedded Engine Systems

    NASA Technical Reports Server (NTRS)

    Arend, David J.; Tillman, Gregory; O'Brien, Walter F.

    2012-01-01

    The National Aeronautics and Space Administration, United Technologies Research Center and Virginia Polytechnic Institute and State University have contracted to pursue multi-disciplinary research into boundary layer ingesting (BLI) propulsors for generation-after-next environmentally responsible subsonic fixed wing aircraft. This Robust Design for Embedded Engine Systems project first conducted a high-level vehicle system study based on a large commercial transport class hybrid wing body aircraft, which determined that a 3 to 5 percent reduction in fuel burn could be achieved over a 7,500 nautical mile mission. Both the pylon-mounted baseline and BLI propulsion systems were based on a low-pressure-ratio fan (1.35) in an ultra-high-bypass-ratio engine (16), consistent with the next generation of advanced commercial turbofans. An optimized, coupled BLI inlet and fan system was subsequently designed to achieve performance targets identified in the system study. The resulting system possesses an inlet with total pressure losses less than 0.5 percent, and a fan stage with an efficiency debit of less than 1.5 percent relative to the pylon-mounted, clean-inflow baseline. The subject research project has identified tools and methodologies necessary for the design of next-generation, highly-airframe-integrated propulsion systems. These tools will be validated in future large-scale testing of the BLI inlet / fan system in NASA's 8-foot x 6-foot transonic wind tunnel. In addition, fan unsteady response to screen-generated total pressure distortion is being characterized experimentally in a JT15D engine test rig. These data will document engine sensitivities to distortion magnitude and spatial distribution, providing early insight into key physical processes that will control BLI propulsor design.

  8. A comprehensive evaluation of assembly scaffolding tools

    PubMed Central

    2014-01-01

    Background Genome assembly is typically a two-stage process: contig assembly followed by the use of paired sequencing reads to join contigs into scaffolds. Scaffolds are usually the focus of reported assembly statistics; longer scaffolds greatly facilitate the use of genome sequences in downstream analyses, and it is appealing to present larger numbers as metrics of assembly performance. However, scaffolds are highly prone to errors, especially when generated using short reads, which can directly result in inflated assembly statistics. Results Here we provide the first independent evaluation of scaffolding tools for second-generation sequencing data. We find large variations in the quality of results depending on the tool and dataset used. Even extremely simple test cases of perfect input, constructed to elucidate the behaviour of each algorithm, produced some surprising results. We further dissect the performance of the scaffolders using real and simulated sequencing data derived from the genomes of Staphylococcus aureus, Rhodobacter sphaeroides, Plasmodium falciparum and Homo sapiens. The results from simulated data are of high quality, with several of the tools producing perfect output. However, at least 10% of joins remain unidentified when using real data. Conclusions The scaffolders vary in their usability, speed and number of correct and missed joins made between contigs. Results from real data highlight opportunities for further improvements of the tools. Overall, SGA, SOPRA and SSPACE generally outperform the other tools on our datasets. However, the quality of the results is highly dependent on the read mapper and genome complexity. PMID:24581555

  9. Clinical instruments: reliability and validity critical appraisal.

    PubMed

    Brink, Yolandi; Louw, Quinette A

    2012-12-01

    RATIONALE, AIM AND OBJECTIVES: Few health care practitioners use objective clinical tools with sound psychometric properties, and researchers need to improve their reporting of the validity and reliability results of these clinical tools. Therefore, to promote the use of valid and reliable tools or tests for clinical evaluation, this paper reports on the development of a critical appraisal tool to assess the psychometric properties of objective clinical tools. A five-step process was followed to develop the new critical appraisal tool: (1) preliminary conceptual decisions; (2) defining key concepts; (3) item generation; (4) assessment of face validity; and (5) formulation of the final tool. The new critical appraisal tool consists of 13 items, of which five relate to both validity and reliability studies, four to validity studies only and four to reliability studies. The 13 items are scored as 'yes', 'no' or 'not applicable'. This critical appraisal tool will aid health care practitioners in critically appraising the relevant literature and researchers in improving the quality of reporting of the validity and reliability of objective clinical tools. © 2011 Blackwell Publishing Ltd.

  10. A new optimization tool path planning for 3-axis end milling of free-form surfaces based on efficient machining intervals

    NASA Astrophysics Data System (ADS)

    Vu, Duy-Duc; Monies, Frédéric; Rubio, Walter

    2018-05-01

    A large number of studies on 3-axis end milling of free-form surfaces seek to optimize tool path planning. These approaches try to minimize machining time by reducing the total tool path length while respecting the criterion of maximum scallop height. Theoretically, the tool path trajectories that remove the most material follow the directions in which the machined width is largest. The free-form surface is often treated as a single machining area, so optimization over the entire surface is limited: it is difficult to define tool trajectories whose feed directions generate the largest machined widths everywhere. Another factor limiting how effectively previous approaches reduce machining time is the choice of tool: researchers generally use a spherical cutter over the entire surface, and the gains offered by these methods therefore amount to relatively small time savings. This study proposes a new method, using toroidal milling tools, for generating toolpaths in different regions of the machined surface. The surface is divided into several regions based on machining intervals. These intervals ensure that the effective radius of the tool, at each cutter-contact point on the surface, is always greater than the tool radius in an optimized feed direction. A parallel-plane strategy is then used on the sub-surfaces, with an optimal specific feed direction for each sub-surface. This method allows the entire surface to be milled with greater efficiency than a spherical tool provides. The proposed method is implemented in Maple software to find the optimal regions and the feed direction in each region. The new method is tested on a free-form surface, and a comparison with a spherical cutter shows the significant gains obtained with a toroidal milling cutter. Comparisons with CAM software and experimental validations are also presented, and the results demonstrate the efficiency of the method.
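
    The geometric core of such interval-based region splitting can be sketched briefly. One approximation commonly quoted in the tool-path literature gives the effective cutting radius of a toroidal (bull-nose) cutter on an inclined surface, and the standard small-scallop relation h ~ p^2 / (8 R_eff) then yields the pass spacing p. The formula and the parameter values below are illustrative assumptions, not the authors' exact model:

        import math

        def effective_radius_toroidal(R, r, slope_deg):
            """Effective cutting radius of a toroidal cutter (major radius R,
            corner radius r) on a surface inclined at slope_deg, measured in
            the plane perpendicular to the feed direction. A common
            approximation, valid for 0 < slope_deg <= 90; for a ball mill
            (R == r) it reduces to r, as expected."""
            theta = math.radians(slope_deg)
            return r + (R - r) / math.sin(theta)

        def step_over(r_eff, scallop):
            """Pass spacing p giving scallop height h for effective radius
            r_eff, using the small-scallop approximation h ~ p^2 / (8 r_eff)."""
            return math.sqrt(8.0 * r_eff * scallop)

        # Hypothetical 10 mm toroidal cutter (R = 5 mm, r = 1 mm) on a
        # 30 degree slope, targeting a 0.01 mm scallop:
        r_eff = effective_radius_toroidal(5.0, 1.0, 30.0)   # 9 mm
        print(step_over(r_eff, 0.01))                       # ~0.85 mm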

  11. SU-F-BRA-12: End-User Oriented Tools and Procedures for Testing Brachytherapy TPSs Employing MBDCAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peppa, V; Pappas, E; Lahanas, V

    2015-06-15

    Purpose: To develop user-oriented tools for commissioning and dosimetry testing of {sup 192}Ir brachytherapy treatment planning systems (TPSs) employing model based dose calculation algorithms (MBDCAs). Methods: A software tool (BrachyGuide) has been developed for the automatic generation of MCNP6 input files from any CT-based plan exported in DICOM RT format from Elekta and Varian TPSs. BrachyGuide also facilitates the evaluation of imported Monte Carlo (MC) and TPS dose distributions in terms of % dose differences and gamma index (CT-overlaid colormaps or relative frequency plots) as well as DVHs and related indices. For users not equipped to perform MC, a set of computational models was prepared in DICOM format, accompanied by treatment plans and corresponding MCNP6-generated reference data. BrachyGuide can then be used to compare institutional and reference data as per TG186. The model set includes a water sphere with the MBDCA WG {sup 192}Ir source placed centrally and in two eccentric positions, a water sphere with cubic bone and lung inhomogeneities and a plan with five source dwells, and a patient-equivalent model with an Accelerated Partial Breast Irradiation (APBI) plan. Results: The tools developed were used for the dosimetry testing of the Acuros and ACE MBDCAs implemented in BrachyVision v.13 and Oncentra Brachy v.4.5, respectively. Findings were consistent with previous results in the literature. Besides points close to the source dwells, Acuros was found to agree within type A uncertainties with the reference MC results. Differences greater than the MC type A uncertainty were observed for ACE at distances >5 cm from the source dwells and in bone. Conclusion: The tools developed are efficient for brachytherapy MBDCA planning commissioning and testing. Since they are appropriate for distribution over the web, they will be put at the AAPM WG MBDCA's disposal. Research co-financed by the ESF and Greek funds. NSRF operational Program: Education and Lifelong Learning Investing in Knowledge Society-Aristeia. Varian Medical Systems and Nucletron, an Elekta company, provided access to TPSs for research purposes. Miss Peppa was supported by IKY fellowships of excellence for postgraduate studies in Greece, Siemens Program.
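
    The gamma-index comparison that BrachyGuide performs can be illustrated with a minimal, general-purpose sketch (a global 1D evaluation after the Low et al. formulation; BrachyGuide itself works on full 3D CT-registered distributions, and the criteria below are illustrative):

        import numpy as np

        def gamma_index_1d(dose_ref, dose_eval, positions, dta_mm=2.0, dd_frac=0.02):
            """Global 1D gamma evaluation: for each reference point, search all
            evaluated points for the smallest combined dose-difference /
            distance-to-agreement metric. Points with gamma <= 1 pass. A sketch
            only; clinical tools interpolate the evaluated grid and work in 3D."""
            dose_ref = np.asarray(dose_ref, float)
            dose_eval = np.asarray(dose_eval, float)
            positions = np.asarray(positions, float)
            dd_abs = dd_frac * dose_ref.max()           # global dose criterion
            gamma = np.empty(len(dose_ref))
            for i in range(len(dose_ref)):
                dist2 = ((positions - positions[i]) / dta_mm) ** 2
                dose2 = ((dose_eval - dose_ref[i]) / dd_abs) ** 2
                gamma[i] = np.sqrt((dist2 + dose2).min())
            return gamma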

  12. Combining Archetypes, Ontologies and Formalization Enables Automated Computation of Quality Indicators.

    PubMed

    Legaz-García, María Del Carmen; Dentler, Kathrin; Fernández-Breis, Jesualdo Tomás; Cornet, Ronald

    2017-01-01

    ArchMS is a framework that represents clinical information and knowledge using ontologies in OWL, which facilitates semantic interoperability and thereby the exploitation and secondary use of clinical data. However, it does not yet support the automated assessment of quality of care. CLIF is a stepwise method to formalize quality indicators. The method has been implemented in the CLIF tool, which supports its users in generating computable queries based on a patient data model that may itself be based on archetypes. To enable the automated computation of quality indicators using ontologies and archetypes, we tested whether ArchMS and the CLIF tool can be integrated. We successfully automated the process of generating SPARQL queries from quality indicators that had been formalized with CLIF and integrated them into ArchMS. Hence, ontologies and archetypes can be combined for the execution of formalized quality indicators.

  13. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As hardware and software technologies have advanced, the performance of parallel programs written with compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Owing to its ease of programming and good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs while achieving good performance.

  14. Diagnosing schistosomiasis: where are we?

    PubMed

    Gomes, Luciana Inácia; Enk, Martin Johannes; Rabello, Ana

    2014-01-01

    In light of the World Health Organization's initiative to extend schistosomiasis morbidity and mortality control programs by including a disease elimination strategy in low-endemic settings, this paper reviews diagnostic tools described during the last decades and provides an overview of ongoing efforts to make an efficient diagnostic tool available worldwide. A literature search on PubMed using the search criteria schistosomiasis and diagnosis within the period from 1978 to 2013 was carried out. Articles with an abstract in English that used laboratory techniques specifically developed for the detection of schistosomiasis in humans were included. Publications were categorized according to the methodology applied (parasitological, immunological, or molecular) and the stage of development (in-house development, limited field testing, or large-scale field testing). The initial search generated 4,535 publications, of which only 643 met the inclusion criteria. The vast majority (537) of the publications focused on immunological techniques; 81 focused on parasitological diagnosis, and 25 focused on molecular diagnostic methods. Regarding the stage of development, 307 papers referred to in-house development, 202 to limited field tests, and 134 to large-scale field testing. The data obtained show that promising new diagnostic tools, especially for Schistosoma antigen and deoxyribonucleic acid (DNA) detection, which are characterized by high sensitivity and specificity, are being developed. In combination with international funding initiatives, these tools may result in a significant step forward in disease elimination and surveillance by making efficient tests accessible and their large-scale use self-sustainable for control programs in endemic countries.

  15. Coverage path generation for automated non-destructive testing operations applied in the aerospace industry

    NASA Astrophysics Data System (ADS)

    Olivieri, Pierre

    Non-destructive testing (NDT) plays an important role in the aerospace industry during the fabrication and maintenance of structures and is used, among other applications, to detect flaws such as cracks at an early stage. However, NDT techniques are still mainly performed manually, especially on complex aeronautical structures, which has several drawbacks. In addition to being difficult and time-consuming, the reliability and repeatability of inspection results are likely to be affected, since they rely on each operator's experience and dexterity. The present thesis is part of a larger project (MANU-418) of the Consortium for Research and Innovation in Aerospace in Quebec (CRIAQ). In this project, it has been proposed to develop a system using a 6-DOF manipulator arm to automate three NDT techniques often needed in the aerospace industry: eddy current testing (ECT), fluorescent penetrant inspection (FPI), and infrared thermography (IRT). The main objective of the MANU-418 project is to demonstrate the efficiency of the developed system and to provide inspection results for surface and near-surface flaws (usually cracks) at least as reliable and repeatable as those from a human operator. One specific objective stemming from this is to develop a methodology and a software tool to generate covering paths adapted to the three aforementioned NDT techniques for inspecting the complex surfaces of aerospace structures; the present thesis aims at reaching this objective. First, the geometrical and topological properties of the surfaces considered in this project are defined (flat surfaces, round and straight edges, cylindrical or near-cylindrical surfaces, holes). It is also assumed that the 3D model of the surface to inspect is known in advance. Moreover, it was decided within the framework of the MANU-418 project to give priority to the automation of ECT over the other techniques (FPI and IRT). As a result, the methodology developed to generate inspection paths focuses on the path constraints of manual ECT operations using a differential eddy current probe (named here the EC probe), but it is flexible enough to be used with the other techniques as well. Common inspection paths for ECT are usually defined by a sweeping motion in a zigzag pattern with the EC probe in mild contact with the inspected surface. Moreover, the main axis of the probe must stay normal to the surface, and the alignment of its two coils must always be oriented along the direction of motion. A first methodology is proposed to generate covering paths over the whole surface of interest while meeting all EC probe motion constraints. The surface is meshed with triangular facets and subdivided into several patches whose geometry and topology are simpler than those of the whole surface. Paths are then generated on each patch by intersecting its facets with offset section planes defined along a sweeping direction. Furthermore, another methodology is developed to generate paths around an indication (namely, a small area where the presence of a flaw is suspected) whose position and orientation are assumed to be known a priori. A software tool with a graphical user interface was then developed in the MATLAB environment to generate inspection paths based on these methodologies.
A set of path parameters can be changed by the user to obtain the desired paths (distance between passes, sweep direction, etc.). Once paths are computed, an ordered list of tool coordinates (positions and orientations) is exported to an Excel spreadsheet so that it can be used with a real robot. In this research, these data are then used to simulate trajectories (the path described as a function of time) with a MotoMan robot (model SV3XL) in the MotoSim software. After validation of these trajectories in the software (absence of collisions, all positions reachable, etc.), they are converted into instructions for the real MotoMan robot to proceed with experimental tests. These first simulations and experiments with the generated paths on a MotoMan robot have given results close to the expected inspection trajectories used manually in the NDT techniques considered, especially ECT. Nevertheless, it is strongly recommended to validate this path-generation method with more experimental tests; for instance, a "test" tool could be manufactured to measure its position and orientation errors with respect to expected trajectories on a typical complex aeronautical structure. (Abstract shortened by UMI.)
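
    The offset-section-plane step described above is straightforward to prototype: slice the triangulated surface with a family of parallel planes and collect the intersection segments of each pass. A minimal sketch (in Python rather than the thesis's MATLAB; joining segments into ordered passes and attaching probe orientations are omitted):

        import numpy as np

        def plane_mesh_passes(vertices, faces, sweep_normal, spacing):
            """Intersect a triangulated surface with parallel section planes
            offset along sweep_normal, returning one list of line segments per
            plane. vertices: (n, 3) float array; faces: (m, 3) int array.
            Vertices lying exactly on a plane are ignored for brevity."""
            n = np.asarray(sweep_normal, float)
            n /= np.linalg.norm(n)
            vertices = np.asarray(vertices, float)
            d = vertices @ n                                   # signed offsets
            levels = np.arange(d.min() + spacing / 2.0, d.max(), spacing)
            passes = []
            for level in levels:
                segments = []
                for tri in faces:
                    pts = []
                    for i, j in ((0, 1), (1, 2), (2, 0)):      # triangle edges
                        a, b = d[tri[i]] - level, d[tri[j]] - level
                        if a * b < 0.0:                        # edge crosses plane
                            t = a / (a - b)
                            pts.append(vertices[tri[i]]
                                       + t * (vertices[tri[j]] - vertices[tri[i]]))
                    if len(pts) == 2:
                        segments.append((pts[0], pts[1]))
                passes.append(segments)
            return passes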

  16. Nanocoatings for High-Efficiency Industrial and Tooling Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blau, P; Qu, J.; Higdon, C.

    This industry-driven project was the result of a successful response by Eaton Corporation to a DOE/ITP Program industry call. It consisted of three phases in which ORNL participated. In addition to Eaton Corporation and ORNL (CRADA), the project team included Ames Laboratory, which developed the underlying concept for aluminum-magnesium-boron based nanocomposite coatings [1], and Greenleaf, a small tooling manufacturer in western Pennsylvania. This report focuses on the portion of the work conducted by ORNL in a CRADA with Eaton Corporation; a comprehensive final report for the entire effort, which ended in September 2010, has been prepared by Eaton Corporation. Phase I, “Proof of Concept,” ran for one year (September 1, 2006 to September 30, 2007), during which the applicability of AlMgB14 single-phase and nanocomposite coatings on hydraulic material coupons and components, as well as on tool inserts, was demonstrated. The coating processes used either pulsed laser deposition (PLD) or physical vapor deposition (PVD). During Phase I, ORNL conducted laboratory-scale pin-on-disk and reciprocating pin-on-flat tests of coatings produced by PLD and PVD. Non-coated M2 tool steel was used as a baseline for comparison, and the material for the sliding counterface was Type 52100 bearing steel, since it simulated the pump materials. Initial tests were run mainly in a commercial hydraulic fluid, Mobil DTE-24, but some tests were later run in a water-glycol mixture as well. A tribosystem analysis was conducted to define the operating conditions of pump components and to help develop simulative tests in Phase II. Phase II, “Coating Process Scale-up,” was intended to use the scaled-up process to generate prototype parts. This involved both PLD practices at Ames Laboratory and a PVD scale-up study at Eaton using its production-capable equipment; there was also a limited scale-up study at Greenleaf for the tooling application. ORNL continued to conduct friction and wear tests on process variants and developed tests to better simulate the applications of interest. ORNL also employed existing lubrication models to better understand hydraulic pump frictional behavior and test results. Phase III, “Functional Testing,” focused on finalizing the strategy for commercialization of AlMgB14 coatings for both hydraulic and tooling systems. ORNL continued to provide tribology testing and analysis support for hydraulic pump applications, including both laboratory-scale coupon testing and the analysis of friction and wear data from full component-level tests performed at Eaton Corporation. Laboratory-scale tribology test methods were used to characterize the behavior of nanocomposite coatings prior to running them in full-sized hydraulic pumps. This task also included developing tribosystem analyses, both to provide a better understanding of the performance of coated surfaces in alternate hydraulic fluids and to help design useful laboratory protocols. The analysis also included modeling the lubrication conditions and identifying the physical processes by which wear and friction of the contact interface change over time. This final report summarizes ORNL's portion of the nanocomposite coatings development effort and presents both the generated data and the analyses used in the course of this effort.

  17. Man-rated flight software for the F-8 DFBW program

    NASA Technical Reports Server (NTRS)

    Bairnsfather, R. R.

    1975-01-01

    The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program Assembly Control, simulator configuration control, erasable-memory load generation, change procedures, and anomaly reporting are discussed. The primary verification tools--the all-digital simulator, the hybrid simulator, and the Iron Bird simulator--are described, as well as the program test plans and their implementation on the various simulators. Failure-effects analysis and the creation of special failure-generating software for testing purposes are described. The quality of the end product is evidenced by the F-8 DFBW flight test program, in which 42 flights, totaling 58 hours of flight time, were successfully made without any DFCS in-flight software or hardware failures.

  18. GOES-R AWG GLM Val Tool Development

    NASA Technical Reports Server (NTRS)

    Bateman, Monte; Mach, Douglas; Goodman, Steve; Blakeslee, Richard; Koshak, William

    2012-01-01

    We are developing the tools needed to enable validation of the Geostationary Lightning Mapper (GLM). In order to develop and test these tools, we need a robust, high-fidelity set of GLM proxy data. Many steps have been taken to ensure that the proxy data are of high quality. LIS is the closest analog that exists for GLM, so it has been used extensively in developing the GLM proxy. We have verified the proxy data both statistically and algorithmically. The proxy data are pixel (event) data, called Level 1B. These data were then clustered into flashes by the Lightning Cluster-Filter Algorithm (LCFA), generating proxy Level 2 data. These were compared with the data used to generate the proxy, validating both the proxy data and the LCFA. We have developed tools that allow us to visualize and compare the GLM proxy data with several other sources of lightning and other meteorological data (the so-called shallow-dive tool). The shallow-dive tool shows storm-level data and can ingest many different ground-based lightning detection networks, including NLDN, LMA, WWLLN, and ENTLN. These are presented so that one can see whether GLM detects lightning at locations and times comparable to the ground-based networks. Currently in development is the deep-dive tool, which will allow us to dive into the GLM data down to the flash, group, and event level. This will allow us to assess performance in comparison with other data sources, and to tell whether there are detection, timing, or geolocation problems. These tools will be compatible with the GLM Level-2 data format, so they can be used beginning on Day 0.

  19. Proceedings of the Workshop on software tools for distributed intelligent control systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herget, C.J.

    1990-09-01

    The Workshop on Software Tools for Distributed Intelligent Control Systems was organized by Lawrence Livermore National Laboratory for the United States Army Headquarters Training and Doctrine Command and the Defense Advanced Research Projects Agency. The goals of the workshop were to identify the current state of the art in tools that support control systems engineering design and implementation; to identify research issues associated with writing software tools that would provide a design environment to assist engineers in multidisciplinary control design and implementation; to formulate a potential investment strategy to resolve the research issues and develop public domain code that can form the core of more powerful engineering design tools; and to recommend test cases to focus the software development process and test associated performance metrics. Recognizing that the development of software tools for distributed intelligent control systems will require a multidisciplinary effort, experts in systems engineering, control systems engineering, and computer science were invited to participate in the workshop. In particular, experts who could address the following topics were selected: operating systems, engineering data representation and manipulation, emerging standards for manufacturing data, mathematical foundations, coupling of symbolic and numerical computation, user interface, system identification, system representation at different levels of abstraction, system specification, system design, verification and validation, automatic code generation, and integration of modular, reusable code.

  20. Demonstration of lithography patterns using reflective e-beam direct write

    NASA Astrophysics Data System (ADS)

    Freed, Regina; Sun, Jeff; Brodie, Alan; Petric, Paul; McCord, Mark; Ronse, Kurt; Haspeslagh, Luc; Vereecke, Bart

    2011-04-01

    Traditionally, e-beam direct write lithography has been too slow for most lithography applications. It has been used for mask writing rather than wafer processing because maximum-blur requirements limit column beam current, which in turn drives e-beam throughput. Printing small features at a fine pitch with an e-beam tool requires a sacrifice in processing time unless the total number of beams on a single writing tool is significantly increased. Because of the uncertainty of the optical lithography roadmap beyond the 22 nm technology node, the semiconductor equipment industry is designing and testing e-beam lithography tools with the potential for high-volume wafer processing. In this work, we report on the development and current status of a new maskless, direct write e-beam lithography tool with the potential for high-volume lithography at and below the 22 nm technology node. A Reflective Electron Beam Lithography (REBL) tool is being developed for high-throughput electron beam direct write maskless lithography. The system targets critical patterning steps at the 22 nm node and beyond at a capital cost equivalent to conventional lithography. REBL incorporates a number of novel technologies to generate and expose lithographic patterns with a throughput and footprint comparable to current 193 nm immersion lithography systems. A patented reflective electron optic, the Digital Pattern Generator (DPG), enables this unique approach. The DPG is a CMOS ASIC chip with an array of small, independently controllable lens elements (lenslets), which act as an array of electron mirrors. In this way, the REBL system is capable of generating the pattern to be written using massively parallel exposure by ~1 million beams at extremely high data rates (~1 Tbps). A rotary stage concept, using a rotating platen carrying multiple wafers, optimizes the writing strategy of the DPG to achieve high throughput for sparse-pattern wafer levels. The lens elements of the DPG are fabricated at IMEC (Leuven, Belgium) under IMEC's CMORE program. The CMOS-fabricated DPG contains ~1,000,000 lens elements, allowing for 1,000,000 individually controllable beamlets. A single lens element consists of 5 electrodes, each of which can be set at controlled voltage levels to either absorb or reflect the electron beam. A system using a linear movable stage and the DPG integrated into the electron optics module was used to expose patterns on device-representative wafers. Results of these exposure tests are discussed.

  1. A neural model of rule generation in inductive reasoning.

    PubMed

    Rasmussen, Daniel; Eliasmith, Chris

    2011-01-01

    Inductive reasoning is a fundamental and complex aspect of human intelligence. In particular, how do subjects, given a set of particular examples, generate general descriptions of the rules governing that set? We present a biologically plausible method for accomplishing this task and implement it in a spiking neuron model. We demonstrate the success of this model by applying it to the problem domain of Raven's Progressive Matrices, a widely used tool in the field of intelligence testing. The model is able to generate the rules necessary to correctly solve Raven's items, as well as recreate many of the experimental effects observed in human subjects. Copyright © 2011 Cognitive Science Society, Inc.

  2. The FaceBase Consortium: A comprehensive program to facilitate craniofacial research

    PubMed Central

    Hochheiser, Harry; Aronow, Bruce J.; Artinger, Kristin; Beaty, Terri H.; Brinkley, James F.; Chai, Yang; Clouthier, David; Cunningham, Michael L.; Dixon, Michael; Donahue, Leah Rae; Fraser, Scott E.; Hallgrimsson, Benedikt; Iwata, Junichi; Klein, Ophir; Marazita, Mary L.; Murray, Jeffrey C.; Murray, Stephen; de Villena, Fernando Pardo-Manuel; Postlethwait, John; Potter, Steven; Shapiro, Linda; Spritz, Richard; Visel, Axel; Weinberg, Seth M.; Trainor, Paul A.

    2012-01-01

    The FaceBase Consortium consists of ten interlinked research and technology projects whose goal is to generate craniofacial research data and technology for use by the research community through a central data management and integrated bioinformatics hub. Funded by the National Institute of Dental and Craniofacial Research (NIDCR) and currently focused on studying the development of the middle region of the face, the Consortium will produce comprehensive datasets of global gene expression patterns, regulatory elements and sequencing; will generate anatomical and molecular atlases; will provide human normative facial data and other phenotypes; will conduct follow-up studies of a completed genome-wide association study; will generate independent data on the genetics of craniofacial development; will build repositories of animal models and of human samples and data for community access and analysis; and will develop software tools and animal models for analyzing, functionally testing and integrating these data. The FaceBase website (http://www.facebase.org) will serve as a web home for these efforts, providing interactive tools for exploring these datasets, together with discussion forums and other services to support and foster collaboration within the craniofacial research community. PMID:21458441

  3. STOP using just GO: a multi-ontology hypothesis generation tool for high throughput experimentation

    PubMed Central

    2013-01-01

    Background Gene Ontology (GO) enrichment analysis remains one of the most common methods for hypothesis generation from high throughput datasets. However, we believe that researchers strive to test other hypotheses that fall outside of GO. Here, we developed and evaluated a tool for hypothesis generation from gene or protein lists using ontological concepts present in manually curated text that describes those genes and proteins. Results As a consequence we have developed the method Statistical Tracking of Ontological Phrases (STOP) that expands the realm of testable hypotheses in gene set enrichment analyses by integrating automated annotations of genes to terms from over 200 biomedical ontologies. While not as precise as manually curated terms, we find that the additional enriched concepts have value when coupled with traditional enrichment analyses using curated terms. Conclusion Multiple ontologies have been developed for gene and protein annotation, by using a dataset of both manually curated GO terms and automatically recognized concepts from curated text we can expand the realm of hypotheses that can be discovered. The web application STOP is available at http://mooneygroup.org/stop/. PMID:23409969
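
    The enrichment statistic underlying both GO analysis and STOP-style multi-ontology analysis is the one-sided hypergeometric test. A minimal generic sketch (not STOP's actual code; real tools add multiple-testing correction such as Benjamini-Hochberg):

        from scipy.stats import hypergeom

        def enrichment_p(total_genes, term_genes, list_size, term_in_list):
            """One-sided hypergeometric p-value for term over-representation:
            the probability of seeing >= term_in_list annotated genes in a
            random list of list_size drawn from total_genes, of which
            term_genes carry the annotation."""
            return hypergeom.sf(term_in_list - 1, total_genes, term_genes, list_size)

        # Example: 20,000 genes, 300 annotated with a term, a 150-gene hit
        # list containing 12 annotated genes (all numbers hypothetical).
        print(enrichment_p(20000, 300, 150, 12))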

  4. End-user satisfaction of a patient education tool manual versus computer-generated tool.

    PubMed

    Tronni, C; Welebob, E

    1996-01-01

    This article reports a nonexperimental comparative study of end-user satisfaction before and after implementation of a vendor-supplied computerized system (Micromedex, Inc) for providing up-to-date patient instructions regarding diseases, injuries, procedures, and medications. The purpose of this research was to measure the satisfaction of nurses who directly interact with a specific patient-education software application and to compare user satisfaction with manual versus computer-generated materials. A computing satisfaction questionnaire using a scale of 1 to 5 (1 being the lowest) measured end-user computing satisfaction in five constructs: content, accuracy, format, ease of use, and timeliness. Summary statistics were used to calculate mean ratings for each of the questionnaire's 12 items and for each of the five constructs. Mean differences between pre- and post-implementation ratings for the five constructs were significant by paired t-test. Total user satisfaction improved with the computerized system, and the computer-generated materials were rated higher than the manual materials. Implications of these findings are discussed.
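
    The comparison reported here rests on the paired t-test applied to pre- and post-implementation ratings. A minimal sketch with made-up ratings, not the study's data:

        import numpy as np
        from scipy.stats import ttest_rel

        # Hypothetical pre/post satisfaction ratings (1-5 scale) for one
        # construct, one pair of scores per respondent.
        pre  = np.array([3.2, 2.8, 3.5, 3.0, 2.9, 3.6, 3.1, 2.7])
        post = np.array([4.1, 3.6, 4.4, 3.8, 3.5, 4.5, 3.9, 3.4])

        t_stat, p_value = ttest_rel(post, pre)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")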

  5. Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.

    2002-01-01

    Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and from reducing the solution time through the operation counts associated with solving the discretized equations to sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the Third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.

  6. On transform coding tools under development for VP10

    NASA Astrophysics Data System (ADS)

    Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao

    2016-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools has already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1; as a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of the available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.
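
    The flexibility being described, different 1D transforms applied horizontally and vertically to the prediction residue, can be sketched with floating-point stand-ins for the codec's integer transforms (SciPy's DCT-II/DST-II here; the block contents and sizes are arbitrary):

        import numpy as np
        from scipy.fft import dct, dst

        def separable_transform(residual, row_kind="dct", col_kind="dst"):
            """Apply possibly different 1D transforms along the columns and
            rows of a prediction-residual block, illustrating the kind of
            transform flexibility discussed for VP10 (a sketch, not the
            codec's actual integer transforms)."""
            f = {"dct": dct, "dst": dst}
            out = f[col_kind](residual, type=2, axis=0, norm="ortho")  # columns
            out = f[row_kind](out, type=2, axis=1, norm="ortho")       # rows
            return out

        block = np.random.randn(8, 8)        # hypothetical 8x8 residual block
        coeffs = separable_transform(block)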

  7. Integrating satellite imagery with simulation modeling to improve burn severity mapping

    Treesearch

    Eva C. Karau; Pamela G. Sikkink; Robert E. Keane; Gregory K. Dillon

    2014-01-01

    Both satellite imagery and spatial fire effects models are valuable tools for generating burn severity maps that are useful to fire scientists and resource managers. The purpose of this study was to test a new mapping approach that integrates imagery and modeling to create more accurate burn severity maps. We developed and assessed a statistical model that combines the...

  8. Aerodynamics of Race Cars

    NASA Astrophysics Data System (ADS)

    Katz, Joseph

    2006-01-01

    Race car performance depends on elements such as the engine, tires, suspension, road, aerodynamics, and of course the driver. In recent years, however, vehicle aerodynamics has gained increased attention, mainly due to the utilization of the negative lift (downforce) principle, yielding several important performance improvements. This review briefly explains the significance of aerodynamic downforce and how it improves race car performance. After this short introduction, various methods to generate downforce, such as inverted wings, diffusers, and vortex generators, are discussed. Due to the complex geometry of these vehicles, the aerodynamic interaction between the various body components is significant, resulting in vortex flows and lifting-surface shapes unlike traditional airplane wings. Typical design tools such as wind tunnel testing, computational fluid dynamics, and track testing, and their relevance to race car development, are discussed as well. In spite of the tremendous progress in these design tools (due to better instrumentation, communication, and computational power), the fluid dynamics remain highly nonlinear, and predicting the effect of a particular modification is not always trouble free. Several examples covering a wide range of vehicle shapes (e.g., from stock cars to open-wheel race cars) are presented to demonstrate this nonlinear nature of the flow field.

  9. Method for Vibration Response Simulation and Sensor Placement Optimization of a Machine Tool Spindle System with a Bearing Defect

    PubMed Central

    Cao, Hongrui; Niu, Linkai; He, Zhengjia

    2012-01-01

    Bearing defects are one of the most important mechanical sources of vibration and noise generation in machine tool spindles. In this study, an integrated finite element (FE) model is proposed to predict the vibration responses of a spindle bearing system with localized bearing defects, and the sensor placement for better detection of bearing faults is then optimized. A nonlinear bearing model is developed based on Jones' bearing theory, while the drawbar, shaft and housing are modeled as Timoshenko beams. The bearing model is then integrated into the FE model of the drawbar/shaft/housing by assembling the equations of motion. The Newmark time integration method is used to solve the vibration responses numerically. The FE model of the spindle-bearing system was verified by dynamic tests. The localized bearing defects were then modeled, and the vibration responses generated by an outer-ring defect were simulated as an illustration. The optimization of the sensor placement was carried out on the test spindle. The results showed that the optimal sensor placement depends on the vibration modes under different boundary conditions and on the transfer path between the excitation and the response. PMID:23012514
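
    The time-stepping scheme named above, Newmark integration, is standard and easy to sketch for a linear system M x'' + C x' + K x = F(t). The average-acceleration variant is shown below; matrices, initial conditions and load history are the caller's, and the inner iteration that the paper's nonlinear bearing forces would require is omitted:

        import numpy as np

        def newmark_beta(M, C, K, F, x0, v0, dt, beta=0.25, gamma=0.5):
            """Newmark time integration for M x'' + C x' + K x = F(t).
            M, C, K: (n, n) arrays; F: (n_steps, n) array of load vectors;
            x0, v0: initial state. Defaults give the unconditionally stable
            average-acceleration scheme (beta = 1/4, gamma = 1/2)."""
            n_steps, n_dof = F.shape
            x = np.zeros((n_steps, n_dof)); v = np.zeros_like(x); a = np.zeros_like(x)
            x[0], v[0] = x0, v0
            a[0] = np.linalg.solve(M, F[0] - C @ v0 - K @ x0)
            # Effective stiffness is constant for a linear system.
            K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
            for i in range(n_steps - 1):
                rhs = (F[i + 1]
                       + M @ (x[i] / (beta * dt**2) + v[i] / (beta * dt)
                              + (0.5 / beta - 1.0) * a[i])
                       + C @ (gamma / (beta * dt) * x[i]
                              + (gamma / beta - 1.0) * v[i]
                              + dt * (gamma / (2 * beta) - 1.0) * a[i]))
                x[i + 1] = np.linalg.solve(K_eff, rhs)
                a[i + 1] = ((x[i + 1] - x[i]) / (beta * dt**2)
                            - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
                v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
            return x, v, a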

  10. New real-time algorithms for arbitrary, high precision function generation with applications to acoustic transducer excitation

    NASA Astrophysics Data System (ADS)

    Gaydecki, P.

    2009-07-01

    A system is described for the design, downloading and execution of arbitrary functions, intended for use with acoustic and low-frequency ultrasonic transducers in condition monitoring and materials testing applications. The instrumentation comprises a software design tool and a powerful real-time digital signal processor unit operating at 580 million multiply-accumulate operations per second (MMACs). The embedded firmware employs both an established look-up table approach and a new function interpolation technique to generate the real-time signals with very high precision and flexibility. Using total harmonic distortion (THD) analysis, the purity of the waveforms has been compared with that of waveforms generated using traditional analogue function generators; this analysis has confirmed that the new instrument has a consistently superior signal-to-noise ratio.
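
    The look-up-table-plus-interpolation idea the firmware employs can be sketched in a few lines: a phase accumulator indexes a precomputed period of the waveform, and linear interpolation between adjacent entries reduces table-quantization error. A generic sketch with hypothetical parameters, not the instrument's firmware:

        import numpy as np

        def make_lut(func, size=4096):
            """Precompute one period of an arbitrary waveform."""
            phase = np.linspace(0.0, 1.0, size, endpoint=False)
            return func(2.0 * np.pi * phase)

        def generate(lut, freq_hz, fs_hz, n_samples):
            """Phase-accumulator waveform generation with linear interpolation
            between look-up-table entries."""
            size = len(lut)
            phase = (freq_hz / fs_hz) * np.arange(n_samples)  # in periods
            idx_f = (phase % 1.0) * size                      # fractional index
            i0 = idx_f.astype(int)
            frac = idx_f - i0
            i1 = (i0 + 1) % size                              # wrap at period end
            return (1.0 - frac) * lut[i0] + frac * lut[i1]

        lut = make_lut(np.sin)
        samples = generate(lut, freq_hz=1000.0, fs_hz=48000.0, n_samples=480)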

  11. An integrated tool for loop calculations: AITALC

    NASA Astrophysics Data System (ADS)

    Lorca, Alejandro; Riemann, Tord

    2006-01-01

    AITALC, a new tool for automating loop calculations in high energy physics, is described. The package creates Fortran code for two-fermion scattering processes automatically, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool, the intercommunication between them, and illustrate its use with three examples. Program summary -- Title of the program: AITALC version 1.2.1 (9 August 2005). Catalogue identifier: ADWO. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: PC i386. Operating system: GNU/Linux, tested on distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04; also on SOLARIS. Programming languages used: GNU MAKE, DIANA, FORM, FORTRAN77. Additional programs/libraries used: DIANA 2.35 (QGRAF 2.0), FORM 3.1, LOOPTOOLS 2.1 (FF). Memory required to execute with typical data: up to about 10 MB. No. of processors used: 1. No. of lines in distributed program, including test data, etc.: 40,926. No. of bytes in distributed program, including test data, etc.: 371,424. Distribution format: tar gzip file. High-speed storage required: from 1.5 to 30 MB, depending on modules present and unfolding of examples. Nature of the physical problem: calculation of differential cross sections for e+e- annihilation in one-loop approximation. Method of solution: generation and perturbative analysis of Feynman diagrams, with later evaluation of matrix elements and form factors. Restrictions on the complexity of the problem: the limit of application is, for the moment, 2->2 particle reactions in the electroweak Standard Model. Typical running time: a few minutes, depending strongly on the complexity of the process and the FORTRAN compiler.

  12. SU-E-J-127: Implementation of An Online Replanning Tool for VMAT Using Flattening Filter-Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ates, O; Ahunbay, E; Li, X

    2015-06-15

    Purpose: This is to report the implementation of an online replanning tool based on segment aperture morphing (SAM) for VMAT with flattening filter free (FFF) beams. Methods: The previously reported SAM algorithm, modified to accommodate VMAT with FFF beams, was implemented in a tool interfaced with a treatment planning system (Monaco, Elekta). The tool allows (1) outputting the beam parameters of the original VMAT plan from Monaco, and (2) inputting the apertures generated by the SAM algorithm into Monaco for dose calculation on daily CT/CBCT/MRI, in the following steps: (1) quickly generating the target contour based on the image of the day, using an auto-segmentation tool (ADMIRE, Elekta) with manual editing if necessary; (2) morphing the apertures of the original VMAT plan based on SAM to account for the interfractional change of the target from the planning to the daily images; (3) calculating the dose distribution for the new apertures with the same numbers of MU as in the original plan; (4) transferring the new plan into a record & verify system (MOSAIQ, Elekta); (5) performing a software-based pre-delivery QA; and (6) delivering the adaptive plan for the fraction. This workflow was implemented on 16-CPU (2.6 GHz dual-core) hardware with a GPU and was tested on sample cases of prostate, pancreas and lung tumors. Results: The online replanning process can be completed within 10 minutes. The adaptive plans generally improved plan quality compared with the IGRT repositioning plans. The adaptive plans with FFF beams had better normal tissue sparing than those with FF beams. Conclusion: The online replanning tool based on SAM can quickly generate adaptive VMAT plans using FFF beams, with better plan quality than IGRT repositioning plans based on daily CT/CBCT/MRI, and can be used clinically. This research was supported by Elekta Inc. (Crawley, UK).

  13. MutScan: fast detection and visualization of target mutations by scanning FASTQ data.

    PubMed

    Chen, Shifu; Huang, Tanxiao; Wen, Tiexiang; Li, Hong; Xu, Mingyan; Gu, Jia

    2018-01-22

    Some types of clinical genetic tests, such as cancer testing using circulating tumor DNA (ctDNA), require sensitive detection of known target mutations. However, conventional next-generation sequencing (NGS) data analysis pipelines typically involve several filtering steps, which may cause missed detection of key mutations with low frequencies. Variant validation is also indicated for key mutations detected by bioinformatics pipelines. Typically, this process is carried out with alignment visualization tools such as IGV or GenomeBrowse; however, these tools are too heavyweight to be suitable for validating mutations in ultra-deep sequencing data. We developed MutScan to address the problems of sensitive detection and efficient validation of target mutations. MutScan involves highly optimized string-searching algorithms, which scan input FASTQ files to grab all reads that support target mutations. The collected supporting reads for each target mutation are piled up and visualized using web technologies such as HTML and JavaScript. Algorithms such as rolling hash and bloom filter are applied to accelerate scanning, making MutScan able to detect or visualize target mutations very quickly. MutScan is a tool for the detection and visualization of target mutations by scanning FASTQ raw data directly. Compared to conventional pipelines, it offers very high performance, executing about 20 times faster, and maximal sensitivity, since it can grab mutations with even a single supporting read. MutScan visualizes detected mutations by generating interactive pile-ups using web technologies, which can serve to validate target mutations and thus avoid false positives. Furthermore, MutScan can render all mutation records in a VCF file as HTML pages for cloud-friendly VCF validation. MutScan is an open source tool available at GitHub: https://github.com/OpenGene/MutScan.
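
    The core string-scanning idea, a rolling hash over each read with exact verification on hash hits, can be sketched as follows. This is a generic Rabin-Karp scan, not MutScan's implementation (which adds bloom filters, reverse complements and FASTQ parsing):

        def scan_reads(reads, target, base=4, mod=(1 << 61) - 1):
            """Return (read, offset) pairs where `target` occurs in a read,
            using a rolling hash with exact string comparison on hash hits."""
            enc = {"A": 0, "C": 1, "G": 2, "T": 3}
            k = len(target)
            t_hash = 0
            for c in target:
                t_hash = (t_hash * base + enc[c]) % mod
            top = pow(base, k - 1, mod)          # weight of the outgoing base
            hits = []
            for read in reads:
                if len(read) < k:
                    continue
                h = 0
                for c in read[:k]:               # hash of the first window
                    h = (h * base + enc.get(c, 0)) % mod
                for i in range(len(read) - k + 1):
                    if h == t_hash and read[i:i + k] == target:
                        hits.append((read, i))   # verified supporting read
                    if i + k < len(read):        # roll the window one base
                        h = ((h - enc.get(read[i], 0) * top) * base
                             + enc.get(read[i + k], 0)) % mod
            return hits

        print(scan_reads(["ACGTTGCA", "TTGCAACG"], "TTGCA"))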

  14. Early warning and response system (EWARS) for dengue outbreaks: Recent advancements towards widespread applications in critical settings

    PubMed Central

    Kroeger, Axel; Olliaro, Piero; Rocklöv, Joacim; Sewe, Maquins Odhiambo; Tejeda, Gustavo; Benitez, David; Gill, Balvinder; Hakim, S. Lokman; Gomes Carvalho, Roberta; Bowman, Leigh; Petzold, Max

    2018-01-01

    Background Dengue outbreaks are increasing in frequency over space and time, affecting people's health and burdening resource-constrained health systems. The ability to detect emerging outbreaks early is key to mounting an effective response. The early warning and response system (EWARS) is a toolkit that provides countries with early-warning systems for efficient and cost-effective local responses. EWARS uses outbreak and alarm indicators to derive prediction models that can be used prospectively to predict a forthcoming dengue outbreak at the district level. Methods We report on the development of the EWARS tool, based on users' recommendations, into a convenient, user-friendly and reliable software package aided by a user's workbook, and on its field testing in 30 health districts in Brazil, Malaysia and Mexico. Findings Thirty-four health officers from the 30 study districts who had used the original EWARS for 7 to 10 months responded to a questionnaire with mainly open-ended questions. Qualitative content analysis showed that participants were generally satisfied with the tool but preferred open-access over commercial software. EWARS users also stated that the geographical unit should be the district and that access to meteorological information should be improved. These recommendations were incorporated into the second-generation EWARS-R, which uses the free R software together with recent surveillance data and achieved higher sensitivities and positive predictive values of alarm signals than the first-generation EWARS. Currently the use of satellite data for meteorological information is being tested, and a dashboard is being developed to increase the user-friendliness of the tool. The inclusion of other Aedes-borne viral diseases is under discussion. Conclusion EWARS is a pragmatic and useful tool for detecting imminent dengue outbreaks to trigger early response activities. PMID:29727447

  15. Early warning and response system (EWARS) for dengue outbreaks: Recent advancements towards widespread applications in critical settings.

    PubMed

    Hussain-Alkhateeb, Laith; Kroeger, Axel; Olliaro, Piero; Rocklöv, Joacim; Sewe, Maquins Odhiambo; Tejeda, Gustavo; Benitez, David; Gill, Balvinder; Hakim, S Lokman; Gomes Carvalho, Roberta; Bowman, Leigh; Petzold, Max

    2018-01-01

    Dengue outbreaks are increasing in frequency over space and time, affecting people's health and burdening resource-constrained health systems. The ability to detect emerging outbreaks early is key to mounting an effective response. The early warning and response system (EWARS) is a toolkit that provides countries with early-warning systems for efficient and cost-effective local responses. EWARS uses outbreak and alarm indicators to derive prediction models that can be used prospectively to predict a forthcoming dengue outbreak at the district level. We report on the development of the EWARS tool, based on users' recommendations, into a convenient, user-friendly and reliable software package aided by a user's workbook, and on its field testing in 30 health districts in Brazil, Malaysia and Mexico. Thirty-four health officers from the 30 study districts who had used the original EWARS for 7 to 10 months responded to a questionnaire with mainly open-ended questions. Qualitative content analysis showed that participants were generally satisfied with the tool but preferred open-access over commercial software. EWARS users also stated that the geographical unit should be the district and that access to meteorological information should be improved. These recommendations were incorporated into the second-generation EWARS-R, which uses the free R software together with recent surveillance data and achieved higher sensitivities and positive predictive values of alarm signals than the first-generation EWARS. Currently the use of satellite data for meteorological information is being tested, and a dashboard is being developed to increase the user-friendliness of the tool. The inclusion of other Aedes-borne viral diseases is under discussion. EWARS is a pragmatic and useful tool for detecting imminent dengue outbreaks to trigger early response activities.

  16. COMPOSE-HPC: A Transformational Approach to Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernholdt, David E; Allan, Benjamin A.; Armstrong, Robert C.

    2012-04-01

    The goal of the COMPOSE-HPC project is to 'democratize' tools for automatic transformation of program source code so that it becomes tractable for the developers of scientific applications to create and use their own transformations reliably and safely. This paper describes our approach to this challenge, the creation of the KNOT tool chain, which includes tools for the creation of annotation languages to control the transformations (PAUL), to perform the transformations (ROTE), and optimization and code generation (BRAID), which can be used individually and in combination. We also provide examples of current and future uses of the KNOT tools, which include transforming code to use different programming models and environments, providing tests that can be used to detect errors in software or its execution, as well as composition of software written in different programming languages, or with different threading patterns.

  17. A Whale of a Tale: Creating Spacecraft Telemetry Data Analysis Products for the Deep Impact Mission

    NASA Technical Reports Server (NTRS)

    Sturdevant, Kathryn F.; Wright, Jesse J.; Lighty, Roger A.; Nakamura, Lori L.

    2006-01-01

    This paper describes some of the challenges and lessons learned from the Deep Impact (DI) Mission Ground Data System's (GDS) telemetry data processing and product generation tool, nicknamed "Whale." One of the challenges of any mission is to analyze testbed and operational telemetry data. Methods to retrieve these data have to date required spacecraft subsystem members to become experts in the use of a myriad of query and plot tools. As budgets shrink and GDS teams grow smaller, more of the burden of understanding these tools falls on the users. The user base also varies from novice to expert, and requiring users to become GDS tool experts in addition to spacecraft domain experts is an undue burden. The "Whale" approach is to process all of the data for a given spacecraft test and provide each subsystem with plots and data products "automagically."

  18. Development and psychometric validation of a scale to assess information needs in cardiac rehabilitation: the INCR Tool.

    PubMed

    Ghisi, Gabriela Lima de Melo; Grace, Sherry L; Thomas, Scott; Evans, Michael F; Oh, Paul

    2013-06-01

    To develop and psychometrically validate a tool to assess information needs in cardiac rehabilitation (CR) patients. After a literature search, 60 information items divided into 11 areas of need were identified. To establish content validity, they were reviewed by an expert panel (N=10). The refined items were pilot-tested in 34 patients on a 5-point Likert scale from 1 "really not helpful" to 5 "very important". A final version was generated and psychometrically tested in 203 CR patients. Test-retest reliability was assessed via the intraclass correlation coefficient (ICC), internal consistency using Cronbach's alpha, and criterion validity with regard to patients' education and duration in CR. Five items, as well as one area of need, were excluded after ICC analysis. All 10 remaining areas were internally consistent (Cronbach's alpha > 0.7). Criterion validity was supported by significant differences in mean scores by educational level (p < 0.05) and duration in CR (p < 0.001). The mean total score was 4.08 ± 0.53. Patients rated safety as their greatest information need. The INCR Tool was demonstrated to have good reliability and validity. It is an appropriate tool for application in clinical and research settings, assessing patients' needs during CR and as part of education programming. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
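
    The internal-consistency criterion used above (Cronbach's alpha > 0.7) is straightforward to compute from an item-response matrix. A generic sketch with hypothetical ratings, not the study's data:

        import numpy as np

        def cronbach_alpha(scores):
            """Cronbach's alpha for a respondents x items matrix:
            alpha = k/(k-1) * (1 - sum of item variances / variance of
            the total score)."""
            scores = np.asarray(scores, float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars / total_var)

        # Hypothetical ratings: 6 respondents x 4 items on a 1-5 scale.
        ratings = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
                   [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]]
        print(round(cronbach_alpha(ratings), 3))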

  19. Quantifying the Economic and Grid Reliability Impacts of Improved Wind Power Forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Martinez-Anido, Carlo Brancucci; Wu, Hongyu

    Wind power forecasting is an important tool for addressing variability and uncertainty in power system operations. Doing so accurately is important for reducing the occurrence and length of curtailment, enhancing market efficiency, and improving the operational reliability of the bulk power system. This research quantifies the value of wind power forecasting improvements in the IEEE 118-bus test system, modified to emulate the generation mixes of the Midcontinent, California, and New England independent system operator balancing authority areas. To measure the economic value, a commercially available production cost modeling tool was used to simulate the multi-timescale unit commitment (UC) and economic dispatch process, yielding the cost savings and curtailment reductions. To measure the reliability improvements, an in-house tool, FESTIV, was used to calculate the system's area control error and the North American Electric Reliability Corporation Control Performance Standard 2. The approach allowed scientific reproducibility of results and cross-validation of the tools. A total of 270 scenarios were evaluated to accommodate the variation of three factors: generation mix, wind penetration level, and wind forecasting improvement. The modified IEEE 118-bus systems utilized one year of data at multiple timescales, including day-ahead UC, 4-hour-ahead UC, and 5-minute real-time dispatch. The value of improved wind power forecasting was found to be strongly tied to the conventional generation mix, the existence of energy storage devices, and the penetration level of wind energy. The simulation results demonstrate that wind power forecasting brings clear benefits to power system operations.
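
    Of the two reliability metrics, the area control error (ACE) has a compact standard definition that can be sketched directly (the usual NERC form; the input values below are made up):

        def area_control_error(ni_actual_mw, ni_sched_mw, f_actual_hz, f_sched_hz,
                               bias_mw_per_0p1hz):
            """Area control error: ACE = (NIa - NIs) - 10*B*(Fa - Fs), where NI
            is net tie-line interchange (MW) and the frequency bias B is in
            MW per 0.1 Hz (negative by convention)."""
            return ((ni_actual_mw - ni_sched_mw)
                    - 10.0 * bias_mw_per_0p1hz * (f_actual_hz - f_sched_hz))

        # Example: exporting 20 MW over schedule while frequency is 0.02 Hz low,
        # with B = -50 MW/0.1 Hz, gives ACE = 20 - 10 = 10 MW.
        print(area_control_error(520.0, 500.0, 59.98, 60.0, -50.0))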

  20. Rapid SAW Sensor Development Tools

    NASA Technical Reports Server (NTRS)

    Wilson, William C.; Atkinson, Gary M.

    2007-01-01

    The lack of integrated design tools for Surface Acoustic Wave (SAW) devices has led us to develop tools for the design, modeling, analysis, and automatic layout generation of SAW devices. These tools enable rapid development of wireless SAW sensors. The tools developed have been designed to integrate into existing Electronic Design Automation (EDA) tools to take advantage of existing 3D modeling, and Finite Element Analysis (FEA). This paper presents the SAW design, modeling, analysis, and automated layout generation tools.

  1. Automated Generation and Assessment of Autonomous Systems Test Cases

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Friberg, Kenneth H.; Horvath, Gregory A.

    2008-01-01

    This slide presentation reviews issues in the verification and validation testing of autonomous spacecraft, which routinely culminates in the exploration of anomalous or faulted mission-like scenarios, using the work involved in the Dawn mission's tests as examples. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes, or creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it is likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable coverage properties: for example, generating cases for all possible fault monitors and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities; a faster-than-real-time, high-fidelity, software-only simulation allows the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features, provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult. When algorithmic means are employed to produce hundreds or even thousands of cases, generating predictions individually is impractical, and generating them with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges. A good approach to this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies, and in presenting an overall picture of the state of the test program. In this paper we present a case study based on the automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.
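
    A sketch of the combinatorial generation idea the presentation describes — one case per (fault monitor, state-change boundary, injection phase) — plus metric-based scoring; the inventories, result fields, and weights below are hypothetical:

    ```python
    import itertools

    # Hypothetical inventories; a real campaign would pull these from the
    # flight software design, not hard-code them.
    fault_monitors = ["battery_undervolt", "thruster_stuck", "star_tracker_dropout"]
    state_changes = ["launch->cruise", "cruise->approach", "approach->orbit"]
    injection_phases = ["before", "during", "after"]

    def generate_cases():
        """One case per combination: covers all fault monitors across all
        state-change boundaries, with anomalies injected before/during/after."""
        for monitor, boundary, phase in itertools.product(
                fault_monitors, state_changes, injection_phases):
            yield {"monitor": monitor, "boundary": boundary, "phase": phase}

    def score(result: dict) -> float:
        """Toy formal scoring: weight a few metrics so the most anomalous
        runs sort to the top for human review."""
        return (2.0 * result["safing_entries"]
                + 1.0 * result["unexpected_events"]
                + 0.5 * result["response_time_s"])

    cases = list(generate_cases())
    print(len(cases), "cases, e.g.", cases[0])
    ```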

  2. Development of a qualitative indirect ELISA for the measurement of rabies virus-specific antibodies from vaccinated dogs and cats.

    PubMed

    Cliquet, F; McElhinney, L M; Servat, A; Boucher, J M; Lowings, J P; Goddard, T; Mansfield, K L; Fooks, A R

    2004-04-01

    A protocol suitable for the detection of rabies virus-specific antibodies in serum samples from companion animals using an enzyme-linked immunosorbent assay (ELISA) is described. This method has been used successfully for the qualitative assessment of rabies virus-specific antibodies in serum samples from a cohort of vaccinated dogs and cats. In two initial field studies, a variable population of field samples from the Veterinary Laboratories Agency (VLA), United Kingdom, was tested. In the first study (n = 1000), the number of false-positive and false-negative results was 11 samples (1.1%) and 67 samples (6.7%), respectively. In the second study (n = 920), the number of false-positive and false-negative results was 7 samples (0.8%) and 52 samples (5.7%). In a third study, undertaken at l'Agence Française de Sécurité Sanitaire des Aliments (AFSSA), Nancy, France (n = 440), 1 false-positive sample (0.23%) and 91 (20.7%) false-negative samples were identified. Data generated using this prototype ELISA indicate a strong correlation for specificity when compared to the gold-standard fluorescent antibody virus neutralisation (FAVN) test. Although the ELISA has a lower sensitivity than the FAVN test, it is a useful tool for rapidly screening serum samples from vaccinated companion animals. Using a cut-off value of 0.6 EU/ml, the sensitivity (R = % from VLA and 79% from AFSSA) and specificity (R = 97.3%) indices of the ELISA compared favourably with data generated using the FAVN test. The major advantages of the ELISA test are that it is a qualitative tool that can be completed in four hours, does not require the use of live virus, and can be performed without the need for specialised laboratory containment; this contrasts with 4 days for conventional rabies antibody virus neutralisation assays. Using the current format, the ELISA assay described would be a valuable screening tool for the detection of rabies antibodies in vaccinated domestic animals, in combination with other Office International des Epizooties (OIE) accepted serological tests.
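
    The sensitivity and specificity figures above come from cross-tabulating qualitative ELISA calls (EU/ml against the 0.6 cut-off) with FAVN results treated as the gold standard; a small sketch of that computation on a hypothetical mini-cohort:

    ```python
    def qualitative_elisa_performance(samples, cutoff=0.6):
        """Compare qualitative ELISA calls against FAVN calls.

        `samples` is a list of (elisa_eu_per_ml, favn_positive) pairs;
        FAVN is treated as the gold standard, as in the study above.
        """
        tp = fp = tn = fn = 0
        for eu, favn_pos in samples:
            elisa_pos = eu >= cutoff
            if elisa_pos and favn_pos:       tp += 1
            elif elisa_pos and not favn_pos: fp += 1
            elif not elisa_pos and favn_pos: fn += 1
            else:                            tn += 1
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity

    # Hypothetical mini-cohort of (EU/ml, FAVN-positive) pairs
    cohort = [(1.2, True), (0.3, True), (0.1, False), (0.9, True)]
    print(qualitative_elisa_performance(cohort))
    ```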

  3. Development and Validation of a Computational Model for Predicting the Behavior of Plumes from Large Solid Rocket Motors

    NASA Technical Reports Server (NTRS)

    Wells, Jason E.; Black, David L.; Taylor, Casey L.

    2013-01-01

    Exhaust plumes from large solid rocket motors fired at ATK's Promontory test site carry particulates to high altitudes and typically produce deposits that fall on regions downwind of the test area. As populations and communities near the test facility grow, ATK has become increasingly concerned about the impact of motor testing on those surrounding communities. To assess the potential impact of motor testing on the community and to identify feasible mitigation strategies, it is essential to have a tool capable of predicting plume behavior downrange of the test stand. A software package, called PlumeTracker, has been developed and validated at ATK for this purpose. The code is a point model that offers a time-dependent, physics-based description of plume transport and precipitation. The code can utilize either measured or forecasted weather data to generate plume predictions. Next-Generation Radar (NEXRAD) data and field observations from twenty-three historical motor test fires at Promontory were collected to test the predictive capability of PlumeTracker. Model predictions for plume trajectories and deposition fields were found to correlate well with the collected dataset.
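
    PlumeTracker's internals are described above only as a time-dependent, physics-based point model. Purely as an illustration of that class of model (not the ATK code), a single transport step combining wind advection with particulate settling might look like the following, with all numbers illustrative:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Parcel:
        x: float      # downwind position, m
        y: float      # crosswind position, m
        z: float      # altitude, m

    def advect(parcel: Parcel, wind_u: float, wind_v: float,
               settling_velocity: float, dt: float) -> Parcel:
        """One explicit time step of a point plume model: horizontal advection
        by the (measured or forecasted) wind plus gravitational settling of
        particulates; reaching the ground marks a deposition point."""
        return Parcel(
            x=parcel.x + wind_u * dt,
            y=parcel.y + wind_v * dt,
            z=max(parcel.z - settling_velocity * dt, 0.0),
        )

    p = Parcel(0.0, 0.0, 3000.0)
    for _ in range(60):                       # one hour at 60 s steps
        p = advect(p, wind_u=5.0, wind_v=1.0, settling_velocity=0.5, dt=60.0)
    print(p)                                  # z == 0 would indicate deposition
    ```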

  4. An Approach to Model Based Testing of Multiagent Systems

    PubMed Central

    Nadeem, Aamer

    2015-01-01

    Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to designing multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions, along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion. PMID:25874263
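
    A minimal sketch of turning a protocol graph into test paths under an all-edges coverage criterion; the acyclic graph below is a hypothetical stand-in for one derived from a Prometheus protocol diagram:

    ```python
    from collections import defaultdict

    # Hypothetical protocol graph: nodes are interaction states, edges are
    # percepts/actions/messages taken from the protocol diagram.
    edges = [("start", "request"), ("request", "offer"),
             ("offer", "accept"), ("offer", "reject"), ("accept", "done")]

    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    def edge_coverage_paths(start="start"):
        """Generate test paths until every edge is traversed at least once
        (all-edges coverage); each path is a candidate test case. Assumes an
        acyclic graph, as here."""
        uncovered, paths = set(edges), []
        while uncovered:
            node, path = start, [start]
            while graph[node]:
                # Prefer an uncovered outgoing edge; otherwise take the first.
                nxt = next((d for d in graph[node] if (node, d) in uncovered),
                           graph[node][0])
                uncovered.discard((node, nxt))
                path.append(nxt)
                node = nxt
            paths.append(path)
        return paths

    for p in edge_coverage_paths():
        print(" -> ".join(p))
    ```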

  5. Relating genes to function: identifying enriched transcription factors using the ENCODE ChIP-Seq significance tool.

    PubMed

    Auerbach, Raymond K; Chen, Bin; Butte, Atul J

    2013-08-01

    Biological analysis has shifted from identifying genes and transcripts to mapping these genes and transcripts to biological functions. The ENCODE Project has generated hundreds of ChIP-Seq experiments spanning multiple transcription factors and cell lines for public use, but tools for a biomedical scientist to analyze these data are either non-existent or tailored to narrow biological questions. We present the ENCODE ChIP-Seq Significance Tool, a flexible web application leveraging public ENCODE data to identify enriched transcription factors in a gene or transcript list for comparative analyses. The ENCODE ChIP-Seq Significance Tool is written in JavaScript on the client side and has been tested on Google Chrome, Apple Safari and Mozilla Firefox browsers. Server-side scripts are written in PHP and leverage R and a MySQL database. The tool is available at http://encodeqt.stanford.edu. Contact: abutte@stanford.edu. Supplementary material is available at Bioinformatics online.
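
    The abstract does not name the statistic, but enrichment of a transcription factor's targets in a gene list is conventionally assessed with a one-sided hypergeometric test; a sketch with hypothetical gene sets and background size:

    ```python
    from scipy.stats import hypergeom

    def tf_enrichment(user_genes, tf_targets, background_size):
        """One-sided hypergeometric test for over-representation of a
        transcription factor's ChIP-Seq targets in a user's gene list."""
        k = len(user_genes & tf_targets)          # overlap
        M, n, N = background_size, len(tf_targets), len(user_genes)
        # P(X >= k) with X ~ Hypergeom(M, n, N)
        return hypergeom.sf(k - 1, M, n, N)

    user = {"TP53", "MYC", "EGFR", "CDK2"}
    targets = {"MYC", "EGFR", "CDK2", "BRCA1", "AKT1"}   # hypothetical TF targets
    print(f"p = {tf_enrichment(user, targets, background_size=20000):.3g}")
    ```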

  6. Individually Coded Telemetry: a Tool for Studying Heart Rate and Behaviour in Reindeer Calves

    PubMed Central

    Eloranta, E; Norberg, H; Nilsson, A; Pudas, T; Säkkinen, H

    2002-01-01

    The aim of the study was to test the performance of a silver wire modified version of the coded telemetric heart rate monitor Polar Vantage NV™ (PVNV) and to measure heart rate (HR) in a group of captive reindeer calves during different behaviour. The technical performance of PVNV HR monitors was tested in cold conditions (-30°C) using a pulse generator and the correlation between generated pulse and PVNV values was high (r = 0.9957). The accuracy was tested by comparing the HR obtained with the PVNV monitor with the standard ECG, and the correlation was significant (r = 0.9965). Both circadian HR and HR related to behavioural pattern were recorded. A circadian rhythm was observed in the HR in reindeer with a minimum during night and early morning hours and maximum at noon and during the afternoon, the average HR of the reindeer calves studied being 42.5 beats/min in February. The behaviour was recorded by focal individual observations and the data was synchronized with the output of the HR monitors. Running differed from all other behavioural categories in HR. Inter-individual differences were seen expressing individual responses to external and internal stimuli. The silver wire modified Polar Vantage NV™ provides a suitable and reliable tool for measuring heart rate in reindeer, also in natural conditions. PMID:12564543

  7. NASA Tech Briefs, September 2011

    NASA Technical Reports Server (NTRS)

    2011-01-01

    Topics covered include: Fused Reality for Enhanced Flight Test Capabilities; Thermography to Inspect Insulation of Large Cryogenic Tanks; Crush Test Abuse Stand; Test Generator for MATLAB Simulations; Dynamic Monitoring of Cleanroom Fallout Using an Air Particle Counter; Enhancement to Non-Contacting Stress Measurement of Blade Vibration Frequency; Positively Verifying Mating of Previously Unverifiable Flight Connectors; Radiation-Tolerant Intelligent Memory Stack - RTIMS; Ultra-Low-Dropout Linear Regulator; Excitation of a Parallel Plate Waveguide by an Array of Rectangular Waveguides; FPGA for Power Control of MSL Avionics; UAVSAR Active Electronically Scanned Array; Lockout/Tagout (LOTO) Simulator; Silicon Carbide Mounts for Fabry-Perot Interferometers; Measuring the In-Process Figure, Final Prescription, and System Alignment of Large Optics and Segmented Mirrors Using Lidar Metrology; Fiber-Reinforced Reactive Nano-Epoxy Composites; Polymerization Initiated at the Sidewalls of Carbon Nanotubes; Metal-Matrix/Hollow-Ceramic-Sphere Composites; Piezoelectrically Enhanced Photocathodes; Iridium-Doped Ruthenium Oxide Catalyst for Oxygen Evolution; Improved Mo-Re VPS Alloys for High-Temperature Uses; Data Service Provider Cost Estimation Tool; Hybrid Power Management-Based Vehicle Architecture; Force Limit System; Levitated Duct Fan (LDF) Aircraft Auxiliary Generator; Compact, Two-Sided Structural Cold Plate Configuration; AN Fitting Reconditioning Tool; Active Response Gravity Offload System; Method and Apparatus for Forming Nanodroplets; Rapid Detection of the Varicella Zoster Virus in Saliva; Improved Devices for Collecting Sweat for Chemical Analysis; Phase-Controlled Magnetic Mirror for Wavefront Correction; and Frame-Transfer Gating Raman Spectroscopy for Time-Resolved Multiscalar Combustion Diagnostics.

  8. Economic Evidence and Point-of-Care Testing

    PubMed Central

    St John, Andrew; Price, Christopher P

    2013-01-01

    Health economics has been an established feature of research, policymaking, practice and management in the delivery of healthcare. However, its role is increasing as the cost of healthcare begins to drive changes in most healthcare systems. Thus the output from cost effectiveness studies is now being taken into account when making reimbursement decisions, e.g. in Australia and the United Kingdom. Against this background, it is also recognised that health economic tools, and particularly their outputs, are not always employed in the routine delivery of services. One of the notable consequences of this situation is the poor record of innovation in healthcare with respect to the adoption of new technologies, and the realisation of their benefits. The evidence base for the effectiveness of diagnostic services is well known to be limited, and one consequence of this has been a very limited literature on cost effectiveness. One reason for this situation is undoubtedly the reimbursement strategies employed in laboratory medicine for many years, simplistically based on the complexity of the test procedure and the delivery as a cost-per-test service. This has proved a disincentive to generating the required evidence, and has led to little effort to build an integrated investment and disinvestment business case associated with care pathway changes. Point-of-care testing creates a particularly challenging scenario because, on the one hand, the unit cost-per-test is larger through the loss of the economy of scale offered by automation, whilst it offers the potential of substantial savings through enabling rapid delivery of results and reduction of facility costs. This is important when many health systems are planning for complete system redesign. We review the literature on economic assessment of point-of-care testing in the context of these developments. PMID:24151342

  9. Advanced synthetic image generation models and their application to multi/hyperspectral algorithm development

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary

    1999-01-01

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements of the entire image area that are not subject to measurement error, thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.
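
    As an illustration of one standard way to impose the band-to-band (spectral) correlation such a model aims to reproduce, white noise can be colored with a Cholesky factor of a target covariance; a numpy sketch with a hypothetical 3-band covariance, not the paper's model:

    ```python
    import numpy as np

    def correlated_spectral_noise(n_pixels: int, band_cov: np.ndarray,
                                  seed: int = 0) -> np.ndarray:
        """Draw per-pixel spectra whose band-to-band covariance matches
        `band_cov`, by applying a Cholesky factor to white Gaussian noise."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(band_cov)            # band_cov = L @ L.T
        white = rng.standard_normal((n_pixels, band_cov.shape[0]))
        return white @ L.T                          # rows now have covariance band_cov

    # Hypothetical 3-band covariance with strong inter-band correlation
    cov = np.array([[1.0, 0.8, 0.6],
                    [0.8, 1.0, 0.8],
                    [0.6, 0.8, 1.0]])
    samples = correlated_spectral_noise(10000, cov)
    print(np.round(np.cov(samples.T), 2))           # ~ cov, as intended
    ```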

  10. Mobile Phone Application Development for the Classroom

    NASA Astrophysics Data System (ADS)

    Lewis, P.; Oostra, D.; Crecelius, S.; Chambers, L. H.

    2012-08-01

    With smartphone sales currently surpassing laptop sales, it is hard not to think that these devices will have a place in the classroom. More specifically, with little to no monetary investment, classroom-centric mobile applications have the ability to suit the needs of teachers. Previously, programming such an item was a daunting task for the classroom teacher. But now, through the use of online visual tools, anyone has the ability to generate a mobile application to suit individual classroom needs. The "MY NASA DATA" (MND) project has begun work on such an application. Using online tools that are directed at the non-programmer, the team has developed two usable mobile applications ("apps") that fit right into the science classroom. The two apps generated include a cloud dichotomous key for cloud identification in the field, and an atmospheric science glossary to help with standardized testing key vocabulary and classroom assignments. Through the use of free online tools, teachers and students now have the ability to customize mobile applications to meet their individual needs. As an extension of the mobile applications, the MND team is planning web-based application programming interfaces (APIs) that will be generated from data that is currently included in the MND Live Access Server. This will allow teachers and students to choose data sets that they want to include in the mobile application without having to populate the API themselves. Through the use of easy-to-understand online mobile app tutorials and MND data sets, teachers will have the ability to generate unit-specific mobile applications to further engage and empower students in the science classroom.

  11. Visual programming for next-generation sequencing data analytics.

    PubMed

    Milicchio, Franco; Rose, Rebecca; Bian, Jiang; Min, Jae; Prosperi, Mattia

    2016-01-01

    High-throughput or next-generation sequencing (NGS) technologies have become an established and affordable experimental framework in biological and medical sciences for all basic and translational research. Processing and analyzing NGS data is challenging. NGS data are big, heterogeneous, sparse, and error prone. Although a plethora of tools for NGS data analysis has emerged in the past decade, (i) software development is still lagging behind data generation capabilities, and (ii) there is a 'cultural' gap between the end user and the developer. Generic software template libraries specifically developed for NGS can help in dealing with the former problem, whilst coupling template libraries with visual programming may help with the latter. Here we scrutinize the state-of-the-art low-level software libraries implemented specifically for NGS and graphical tools for NGS analytics. An ideal developing environment for NGS should be modular (with a native library interface), scalable in computational methods (i.e. serial, multithread, distributed), transparent (platform-independent), interoperable (with external software interface), and usable (via an intuitive graphical user interface). These characteristics should facilitate both the run of standardized NGS pipelines and the development of new workflows based on technological advancements or users' needs. We discuss in detail the potential of a computational framework blending generic template programming and visual programming that addresses all of the current limitations. In the long term, a proper, well-developed (although not necessarily unique) software framework will bridge the current gap between data generation and hypothesis testing. This will eventually facilitate the development of novel diagnostic tools embedded in routine healthcare.

  12. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  14. STS-72 Flight Day 5

    NASA Technical Reports Server (NTRS)

    1996-01-01

    On this fifth day of the STS-72 mission, the flight crew, Cmdr. Brian Duffy, Pilot Brent W. Jett, and Mission Specialists Leroy Chiao, Daniel T. Barry, Winston E. Scott, and Koichi Wakata (NASDA), awakened to music from the television show, 'Star Trek: The Next Generation'. Chiao and Barry are shown suiting up for the first of the two scheduled 6 1/2 hour spacewalks and, later, conducting tests with various tools and materials from the shuttle's cargo bay during the spacewalk. The new heating and cooling units in the spacesuits will be tested during these EVAs.

  15. SLIM: an alternative Web interface for MEDLINE/PubMed searches – a preliminary study

    PubMed Central

    Muin, Michael; Fontelo, Paul; Liu, Fang; Ackerman, Michael

    2005-01-01

    Background: With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Results: Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded during the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. Conclusion: SLIM can be an interactive time-saving tool for online medical literature research that improves user control and the capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine. PMID:16321145
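
    Server-side, SLIM reaches PubMed through the public Entrez E-Utilities; a minimal Python sketch of the same esearch call (SLIM itself is written in PHP/JavaScript):

    ```python
    import json
    import urllib.parse
    import urllib.request

    def pubmed_search(term: str, retmax: int = 5):
        """Query PubMed via the Entrez E-Utilities esearch endpoint and
        return a list of PMIDs."""
        base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
        params = urllib.parse.urlencode({
            "db": "pubmed", "term": term, "retmax": retmax, "retmode": "json",
        })
        with urllib.request.urlopen(f"{base}?{params}") as resp:
            data = json.load(resp)
        return data["esearchresult"]["idlist"]

    print(pubmed_search("evidence-based medicine search interface"))
    ```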

  16. Next-generation sequencing of the BRCA1 and BRCA2 genes for the genetic diagnostics of hereditary breast and/or ovarian cancer.

    PubMed

    Trujillano, Daniel; Weiss, Maximilian E R; Schneider, Juliane; Köster, Julia; Papachristos, Efstathios B; Saviouk, Viatcheslav; Zakharkina, Tetyana; Nahavandi, Nahid; Kovacevic, Lejla; Rolfs, Arndt

    2015-03-01

    Genetic testing for hereditary breast and/or ovarian cancer mostly relies on laborious molecular tools that use Sanger sequencing to scan for mutations in the BRCA1 and BRCA2 genes. We explored a more efficient genetic screening strategy based on next-generation sequencing of the BRCA1 and BRCA2 genes in 210 hereditary breast and/or ovarian cancer patients. We first validated this approach in a cohort of 115 samples with previously known BRCA1 and BRCA2 mutations and polymorphisms. Genomic DNA was amplified using the Ion AmpliSeq BRCA1 and BRCA2 panel. The DNA libraries were pooled, barcoded, and sequenced using an Ion Torrent Personal Genome Machine sequencer. The combination of different robust bioinformatics tools allowed detection of all previously known pathogenic mutations and polymorphisms in the 115 samples, without spurious pathogenic calls. We then used the same assay in a discovery cohort of 95 uncharacterized hereditary breast and/or ovarian cancer patients. In addition, we describe the allelic frequencies across 210 hereditary breast and/or ovarian cancer patients of 74 unique definitely and likely pathogenic and uncertain BRCA1 and BRCA2 variants, some of which have not previously been annotated in the public databases. Targeted next-generation sequencing is ready to replace classic molecular methods for genetic testing of the BRCA1 and BRCA2 genes and provides a greater opportunity for more comprehensive testing of at-risk patients.

  17. SLIM: an alternative Web interface for MEDLINE/PubMed searches - a preliminary study.

    PubMed

    Muin, Michael; Fontelo, Paul; Liu, Fang; Ackerman, Michael

    2005-12-01

    With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine.

  18. Geant4 hadronic physics for space radiation environment.

    PubMed

    Ivantchenko, Anton V; Ivanchenko, Vladimir N; Molina, Jose-Manuel Quesada; Incerti, Sebastien L

    2012-01-01

    To test and to develop Geant4 (Geometry And Tracking version 4) Monte Carlo hadronic models with a focus on applications in a space radiation environment. The Monte Carlo simulations have been performed using the Geant4 toolkit. Binary (BIC), its extension for incident light ions (BIC-ion), and Bertini (BERT) cascades were used as the main Monte Carlo generators. For comparison purposes, some other models were tested as well. The hadronic testing suite has been used as a primary tool for model development and validation against experimental data. The Geant4 pre-compound (PRECO) and de-excitation (DEE) models were revised and improved. Proton, neutron, pion, and ion nuclear interactions were simulated with the recent version of Geant4 9.4 and were compared with experimental data from thin- and thick-target experiments. The Geant4 toolkit offers a large set of models allowing effective simulation of interactions of particles with matter. We have tested different Monte Carlo generators with our hadronic testing suite, and accordingly we can propose an optimal configuration of Geant4 models for the simulation of the space radiation environment.

  19. An online model composition tool for system biology models

    PubMed Central

    2013-01-01

    Background: There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results: We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions: The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a good starting point for beginners, and for more advanced purposes users will be able to access and employ models from the BioModels Database as well. PMID:24006914

  20. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language) is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.

  1. Dust control effectiveness of drywall sanding tools.

    PubMed

    Young-Corbett, Deborah E; Nussbaum, Maury A

    2009-07-01

    In this laboratory study, four drywall sanding tools were evaluated in terms of dust generation rates in the respirable and thoracic size classes. In a repeated measures study design, 16 participants performed simulated drywall finishing tasks with each of four tools: (1) ventilated sander, (2) pole sander, (3) block sander, and (4) wet sponge. Dependent variables of interest were thoracic and respirable breathing zone dust concentrations. Analysis by Friedman's Test revealed that the ventilated drywall sanding tool produced significantly less dust, of both size classes, than did the other three tools. The pole and wet sanders produced significantly less dust of both size classes than did the block sander. The block sander, the most commonly used tool in drywall finishing operations, produced significantly more dust of both size classes than did the other three tools. When compared with the block sander, the other tools offer substantial dust reduction. The ventilated tool reduced respirable concentrations by 88% and thoracic concentrations by 85%. The pole sander reduced respirable concentrations by 58% and thoracic by 50%. The wet sander produced reductions of 60% and 47% in the respirable and thoracic classes, respectively. Wet sponge sanders and pole sanders are effective at reducing breathing-zone dust concentrations; however, based on its superior dust control effectiveness, the ventilated sander is the recommended tool for drywall finishing operations.
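
    For reference, the repeated-measures comparison reported above maps onto the Friedman test available in scipy; a sketch with hypothetical concentrations for a few participants (one column per tool, one row per participant):

    ```python
    from scipy.stats import friedmanchisquare

    # Hypothetical respirable-dust concentrations (mg/m^3) for 5 of the 16
    # participants, matching the repeated-measures design of the study.
    ventilated = [0.11, 0.09, 0.13, 0.10, 0.12]
    pole       = [0.55, 0.49, 0.60, 0.52, 0.57]
    block      = [1.30, 1.21, 1.40, 1.28, 1.35]
    wet_sponge = [0.50, 0.47, 0.58, 0.51, 0.54]

    stat, p = friedmanchisquare(ventilated, pole, block, wet_sponge)
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
    ```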

  2. Improving mapping and SNP-calling performance in multiplexed targeted next-generation sequencing

    PubMed Central

    2012-01-01

    Background: Compared to classical genotyping, targeted next-generation sequencing (tNGS) can be custom-designed to interrogate entire genomic regions of interest, in order to detect novel as well as known variants. To bring down the per-sample cost, one approach is to pool barcoded NGS libraries before sample enrichment. Still, we lack a complete understanding of how this multiplexed tNGS approach and the varying performance of the ever-evolving analytical tools can affect the quality of variant discovery. Therefore, we evaluated the impact of different software tools and analytical approaches on the discovery of single nucleotide polymorphisms (SNPs) in multiplexed tNGS data. To generate our own test model, we combined a sequence capture method with NGS in three experimental stages of increasing complexity (E. coli genes, multiplexed E. coli, and multiplexed HapMap BRCA1/2 regions). Results: We successfully enriched barcoded NGS libraries instead of genomic DNA, achieving reproducible coverage profiles (Pearson correlation coefficients of up to 0.99) across multiplexed samples, with <10% strand bias. However, the SNP calling quality was substantially affected by the choice of tools and mapping strategy. With the aim of reducing computational requirements, we compared conventional whole-genome mapping and SNP-calling with a new, faster approach: target-region mapping with subsequent 'read-backmapping' to the whole genome to reduce the false detection rate. Consequently, we developed a combined mapping pipeline, which includes standard tools (BWA, SAMtools, etc.), and tested it on public HiSeq2000 exome data from the 1000 Genomes Project. Our pipeline saved 12 hours of run time per HiSeq2000 exome sample and detected ~5% more SNPs than the conventional whole-genome approach. This suggests that more potential novel SNPs may be discovered using both approaches than with just the conventional approach. Conclusions: We recommend applying our general 'two-step' mapping approach for more efficient SNP discovery in tNGS. Our study has also shown the benefit of computing inter-sample SNP concordances and inspecting read alignments in order to attain more confident results. PMID:22913592
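
    The 'two-step' idea can be sketched as a thin orchestration layer over the standard tools named in the abstract (BWA and SAMtools, plus bcftools for calling); the file names and exact flags below are illustrative assumptions, not the paper's production pipeline:

    ```python
    import subprocess

    def run(cmd: str):
        """Run one pipeline stage; bwa, samtools and bcftools are standard tools."""
        subprocess.run(cmd, shell=True, check=True)

    # Step 1: map reads against the target regions only (fast, small index).
    run("bwa mem target_regions.fa r1.fq r2.fq | samtools sort -o target.bam -")
    run("samtools index target.bam")

    # Step 2: 'read-backmapping' -- remap only the reads that hit the target
    # to the whole genome, to weed out off-target hits and cut false positives.
    run("samtools fastq -F 4 target.bam > on_target.fq")
    run("bwa mem whole_genome.fa on_target.fq | samtools sort -o backmapped.bam -")
    run("samtools index backmapped.bam")

    # SNP calling on the back-mapped alignments.
    run("bcftools mpileup -f whole_genome.fa backmapped.bam"
        " | bcftools call -mv -o snps.vcf")
    ```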

  3. What Happens to Student Learning When Color Is Added to a New Knowledge Representation Strategy? Implications from Visual Thinking Networking.

    ERIC Educational Resources Information Center

    Longo, Palma J.

    A long-term study was conducted to test the effectiveness of visual thinking networking (VTN), a new generation of knowledge representation strategies with 56 ninth grade earth science students. The recent findings about the brain's organization and processing conceptually ground VTN as a new cognitive tool used by learners when making their…

  4. Manufacturing process applications team (MATeam)

    NASA Technical Reports Server (NTRS)

    Bangs, E. R.

    1980-01-01

    The objectives and activities of an aerospace technology transfer group are outlined, and programs in various stages of progress are described, including the orbital tube flaring device, an infrared proximity sensor for robot positioning, laser stripping of magnet wire, infrared imaging as a welding-process tracking system, carbide coating of cutting tools, nondestructive fracture toughness testing of titanium welds, a portable solar system for agricultural applications, and an anaerobic methane gas generator.

  5. Experiences on developing digital down conversion algorithms using Xilinx system generator

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Yuan, Yuan; Zhao, Lizhi

    2013-07-01

    The Digital Down Conversion (DDC) algorithm is a classical signal processing method widely used in radar and communication systems. In this paper, the DDC function is implemented on an FPGA with the Xilinx System Generator tool. System Generator is an FPGA design tool provided by Xilinx Inc and MathWorks Inc. It makes it very convenient for programmers to manipulate the design and debug the function, especially for complex algorithms. The development of the DDC function with System Generator shows that System Generator is a very fast and efficient tool for FPGA design.
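
    An FPGA DDC design is typically validated against a software reference model; a minimal numpy/scipy sketch of the classical DDC chain (complex NCO mixing, anti-alias low-pass filtering, decimation), with illustrative parameters rather than the paper's design values:

    ```python
    import numpy as np
    from scipy.signal import firwin, lfilter

    def ddc(signal: np.ndarray, fs: float, f_center: float, decimation: int):
        """Classical digital down conversion: mix the band of interest to
        baseband with a complex NCO, low-pass filter, then decimate."""
        n = np.arange(len(signal))
        nco = np.exp(-2j * np.pi * f_center / fs * n)   # numerically controlled oscillator
        baseband = signal * nco                         # shift f_center down to 0 Hz
        taps = firwin(numtaps=129, cutoff=0.8 * (fs / decimation) / 2, fs=fs)
        filtered = lfilter(taps, 1.0, baseband)         # anti-alias low-pass
        return filtered[::decimation]                   # reduce the sample rate

    fs = 1e6
    t = np.arange(4096) / fs
    x = np.cos(2 * np.pi * 200e3 * t)                   # test tone at 200 kHz
    iq = ddc(x, fs, f_center=200e3, decimation=8)
    print(iq.shape, np.abs(iq[200]))                    # tone now sits near DC
    ```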

  6. The JPSS Ground Project Algorithm Verification, Test and Evaluation System

    NASA Astrophysics Data System (ADS)

    Vicente, G. A.; Jain, P.; Chander, G.; Nguyen, V. T.; Dixon, V.

    2016-12-01

    The Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) is an operational system that provides services to the Suomi National Polar-orbiting Partnership (S-NPP) Mission. It is also a unique environment for Calibration/Validation (Cal/Val) and Data Quality Assessment (DQA) of the Joint Polar Satellite System (JPSS) mission data products. GRAVITE provides fast and direct access to the data and products created by the Interface Data Processing Segment (IDPS), the NASA/NOAA operational system that converts Raw Data Records (RDR's) generated by sensors on the S-NPP into calibrated, geo-located Sensor Data Records (SDR's) and generates Mission Unique Products (MUPs). It also facilitates algorithm investigation, integration, checkout and tuning, instrument and product calibration and data quality support, monitoring, and data/product distribution. GRAVITE is the portal for the latest S-NPP and JPSS baselined Processing Coefficient Tables (PCT's) and Look-Up-Tables (LUT's) and hosts a number of DQA offline tools that take advantage of proximity to the near-real-time data flows. It also contains a set of automated and ad-hoc Cal/Val tools used for algorithm analysis and updates, including an instance of the IDPS called the GRAVITE Algorithm Development Area (G-ADA), which runs the latest installation of the IDPS algorithms on identical software and hardware platforms. Two other important GRAVITE components are the Investigator-led Processing System (IPS) and the Investigator Computing Facility (ICF). The IPS is a dedicated environment where authorized users run automated scripts called Product Generation Executables (PGE's) to support Cal/Val and data quality assurance offline. This data-rich and data-driven service holds its own distribution system and allows operators to retrieve science data products. The ICF is a workspace where users can share computing applications and resources and have full access to libraries and science and sensor quality analysis tools. In this presentation we will describe the GRAVITE systems and subsystems, architecture, technical specifications, capabilities and resources, distributed data and products, and the latest advances to support JPSS science algorithm implementation, validation and testing.

  7. QSAR models for predicting octanol/water and organic carbon/water partition coefficients of polychlorinated biphenyls.

    PubMed

    Yu, S; Gao, S; Gan, Y; Zhang, Y; Ruan, X; Wang, Y; Yang, L; Shi, J

    2016-04-01

    Quantitative structure-property relationship modelling can be a valuable alternative method to replace or reduce experimental testing. In particular, some endpoints such as octanol-water (KOW) and organic carbon-water (KOC) partition coefficients of polychlorinated biphenyls (PCBs) are easier to predict and various models have been already developed. In this paper, two different methods, which are multiple linear regression based on the descriptors generated using Dragon software and hologram quantitative structure-activity relationships, were employed to predict suspended particulate matter (SPM) derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of 209 PCBs. The predictive ability of the derived models was validated using a test set. The performances of all these models were compared with EPI Suite™ software. The results indicated that the proposed models were robust and satisfactory, and could provide feasible and promising tools for the rapid assessment of the SPM derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of PCBs.
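
    A minimal sketch of the MLR branch of such a workflow — descriptor matrix in, partition coefficient out, with an external test set for validation — using synthetic stand-ins for the Dragon descriptors and measured values:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical stand-in for Dragon-generated descriptors of 209 PCB
    # congeners (rows) and their measured log KOW values.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(209, 8))                   # descriptor matrix
    log_kow = X @ rng.normal(size=8) + rng.normal(scale=0.2, size=209)

    X_tr, X_te, y_tr, y_te = train_test_split(X, log_kow, test_size=0.25,
                                              random_state=1)
    model = LinearRegression().fit(X_tr, y_tr)      # multiple linear regression
    print(f"external test R^2 = {model.score(X_te, y_te):.3f}")
    ```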

  8. A Summary of NASA Research Exploring the Acoustics of Small Unmanned Aerial Systems

    NASA Technical Reports Server (NTRS)

    Zawodny, Nikolas S.; Christian, Andrew; Cabell, Randolph

    2018-01-01

    Proposed uses of small unmanned aerial systems (sUAS) have the potential to expose large portions of communities to a new noise source. In order to understand the potential noise impact of sUAS, NASA initiated acoustics research as one component of the 3-year DELIVER project, with the goal of documenting the feasibility of using existing aircraft design tools and methods on this class of vehicles. This paper summarizes the acoustics research conducted within the DELIVER project. The research described here represents an initial study, and subsequent research building on the findings of this work has been proposed for other NASA projects. The paper summarizes acoustics research in four areas: measurements of noise generated by flyovers of small unmanned aerial vehicles, measurements in controlled test facilities to understand the noise generated by components of these vehicles, computational predictions of component and full vehicle noise, and psychoacoustic tests including auralizations conducted to assess human annoyance to the noise generated by these vehicles.

  9. Predicting Minimum Control Speed on the Ground (VMCG) and Minimum Control Airspeed (VMCA) of Engine Inoperative Flight Using Aerodynamic Database and Propulsion Database Generators

    NASA Astrophysics Data System (ADS)

    Hadder, Eric Michael

    There are many computer-aided engineering tools and software packages used by aerospace engineers to design and predict specific parameters of an airplane. These tools help a design engineer predict and calculate parameters such as lift, drag, pitching moment, takeoff range, maximum takeoff weight, and maximum flight range. However, there are very limited ways to predict and calculate the minimum control speeds of an airplane in engine-inoperative flight. There are simple solutions as well as complicated solutions, yet there is neither a standard technique nor consistency throughout the aerospace industry. To further complicate this subject, airplane designers have the option of using an Automatic Thrust Control System (ATCS), which directly alters the minimum control speeds of an airplane. This work addresses this issue with a tool used to predict and calculate the Minimum Control Speed on the Ground (VMCG) as well as the Minimum Control Airspeed (VMCA) of any existing or design-stage airplane. With simple line art of an airplane, a program called VORLAX is used to generate an aerodynamic database from which the stability derivatives of the airplane are calculated. Using another program called Numerical Propulsion System Simulation (NPSS), a propulsion database is generated for use with the aerodynamic database to calculate both VMCG and VMCA. This tool was tested using two airplanes, the Airbus A320 and the Lockheed Martin C130J-30 Super Hercules. The A320 does not use an ATCS, whereas the C130J-30 does. The tool properly calculated and matched known values of VMCG and VMCA for both airplanes, which means it can be used to predict VMCG and VMCA for an airplane in the preliminary stages of design. This allows design engineers to include an Automatic Thrust Control System as part of an airplane's design while retaining the ability to predict its VMCG and VMCA.
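
    Purely as an illustration of the physics underneath a VMCA calculation (not the thesis tool's method), a static yawing-moment balance between asymmetric thrust and full rudder gives a closed-form first-order estimate; all numbers below are hypothetical:

    ```python
    import math

    def vmca_estimate(thrust_n, engine_arm_m, rho, wing_area_m2, span_m,
                      cn_delta_r, max_rudder_rad):
        """First-order VMCA from a static yawing-moment balance: full rudder
        must cancel the asymmetric-thrust moment,
            0.5 * rho * V**2 * S * b * Cn_dr * dr_max = T * y_e.
        """
        q_required = (thrust_n * engine_arm_m) / (
            wing_area_m2 * span_m * cn_delta_r * max_rudder_rad)
        return math.sqrt(2.0 * q_required / rho)

    # Hypothetical twin-jet numbers (roughly A320-class geometry)
    v = vmca_estimate(thrust_n=120e3, engine_arm_m=5.75, rho=1.225,
                      wing_area_m2=122.6, span_m=34.1,
                      cn_delta_r=0.08, max_rudder_rad=math.radians(25))
    print(f"VMCA ~ {v:.1f} m/s")
    ```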

  10. PDS4: Harnessing the Power of Generate and Apache Velocity

    NASA Astrophysics Data System (ADS)

    Padams, J.; Cayanan, M.; Hardman, S.

    2018-04-01

    The PDS4 Generate Tool is a Java-based command-line tool developed by the Cartography and Imaging Sciences Nodes (PDSIMG) for generating PDS4 XML labels from Apache Velocity templates and input metadata.

  11. E-novo: an automated workflow for efficient structure-based lead optimization.

    PubMed

    Pearce, Bradley C; Langley, David R; Kang, Jia; Huang, Hongwei; Kulkarni, Amit

    2009-07-01

    An automated E-Novo protocol designed as a structure-based lead optimization tool was prepared through Pipeline Pilot with existing CHARMm components in Discovery Studio. A scaffold core having 3D binding coordinates of interest is generated from a ligand-bound protein structural model. Ligands of interest are generated from the scaffold using an R-group fragmentation/enumeration tool within E-Novo, with their cores aligned. The ligand side chains are conformationally sampled and are subjected to core-constrained protein docking, using a modified CHARMm-based CDOCKER method to generate top poses along with CDOCKER energies. In the final stage of E-Novo, a physics-based binding energy scoring function ranks the top ligand CDOCKER poses using a more accurate Molecular Mechanics-Generalized Born with Surface Area method. Correlation of the calculated ligand binding energies with experimental binding affinities was used to validate protocol performance. Inhibitors of Src tyrosine kinase, CDK2 kinase, beta-secretase, factor Xa, HIV protease, and thrombin were used to test the protocol, using published ligand crystal structure data within reasonably defined binding sites. In-house Respiratory Syncytial Virus inhibitor data were used as a more challenging test set, using a hand-built binding model. Least-squares fits for all data sets suggested reasonable validation of the protocol within the context of observed ligand binding poses. The E-Novo protocol provides a convenient all-in-one structure-based design process for rapid assessment and scoring of lead optimization libraries.

  12. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with a mandate to validate the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools; to validate the distributed production system through functionality, reliability and scale tests; to help sites commission, configure and optimize networking and storage through scale-testing of data transfers and data processing; and to improve the efficiency of accessing data across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, and activities targeted at the commissioning of the distributed production, user analysis and monitoring systems.

  13. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2011-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars entry vehicles. A survey was conducted of existing experimental heat-transfer and shock-shape data for high enthalpy, reacting-gas CO2 flows and five relevant test series were selected for comparison to predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared to these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  14. Assessment of Laminar, Convective Aeroheating Prediction Uncertainties for Mars-Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.

    2013-01-01

    An assessment of computational uncertainties is presented for numerical methods used by NASA to predict laminar, convective aeroheating environments for Mars-entry vehicles. A survey was conducted of existing experimental heat transfer and shock-shape data for high-enthalpy reacting-gas CO2 flows, and five relevant test series were selected for comparison with predictions. Solutions were generated at the experimental test conditions using NASA state-of-the-art computational tools and compared with these data. The comparisons were evaluated to establish predictive uncertainties as a function of total enthalpy and to provide guidance for future experimental testing requirements to help lower these uncertainties.

  15. Automating generation of textual class definitions from OWL to English.

    PubMed

    Stevens, Robert; Malone, James; Williams, Sandra; Power, Richard; Third, Allan

    2011-05-17

    Text definitions for entities within bio-ontologies are a cornerstone of the effort to gain a consensus in understanding and usage of those ontologies. Writing these definitions is, however, a considerable effort and there is often a lag between specification of the main part of an ontology (logical descriptions and definitions of entities) and the development of the text-based definitions. The goal of natural language generation (NLG) from ontologies is to take the logical description of entities and generate fluent natural language. The application described here uses NLG to automatically provide text-based definitions from an ontology that has logical descriptions of its entities, so avoiding the bottleneck of authoring these definitions by hand. To produce the descriptions, the program collects all the axioms relating to a given entity, groups them according to common structure, realises each group through an English sentence, and assembles the resulting sentences into a paragraph, to form as 'coherent' a text as possible without human intervention. Sentence generation is accomplished using a generic grammar based on logical patterns in OWL, together with a lexicon for realising atomic entities. We have tested our output for the Experimental Factor Ontology (EFO) using a simple survey strategy to explore the fluency of the generated text and how well it conveys the underlying axiomatisation. Two rounds of survey and improvement show that overall the generated English definitions are found to convey the intended meaning of the axiomatisation in a satisfactory manner. The surveys also suggested that one form of generated English will not be universally liked; that intrusion of too much 'formal ontology' was not liked; and that too much explicit exposure of OWL semantics was also not liked. Our prototype tools can generate reasonable paragraphs of English text that can act as definitions. The definitions were found acceptable by our survey and, as a result, the developers of EFO are sufficiently satisfied with the output that the generated definitions have been incorporated into EFO. Whilst not a substitute for hand-written textual definitions, our generated definitions are a useful starting point. An on-line version of the NLG text definition tool can be found at http://swat.open.ac.uk/tools/. The questionnaire and sample generated text definitions may be found at http://mcs.open.ac.uk/nlg/SWAT/bio-ontologies.html.
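
    A toy sketch of the collect/group/realise/assemble flow described above, with hypothetical axiom patterns and sentence templates standing in for the OWL logical patterns and lexicon the real system uses:

    ```python
    from collections import defaultdict

    # Hypothetical OWL-style axioms for one entity, as (pattern, filler) pairs.
    axioms = [("subClassOf", "experimental factor"),
              ("part_of", "cell line"),
              ("part_of", "organism"),
              ("has_quality", "immortalized")]

    templates = {        # one generic sentence pattern per logical pattern
        "subClassOf":  "It is a kind of {}.",
        "part_of":     "It is part of {}.",
        "has_quality": "It has the quality {}.",
    }

    def realise(entity: str, axioms) -> str:
        """Group axioms by common structure, realise each group as one English
        sentence (aggregating the fillers), and assemble the paragraph."""
        groups = defaultdict(list)
        for pattern, filler in axioms:
            groups[pattern].append(filler)
        sentences = [f"{entity} is defined as follows."]
        for pattern, fillers in groups.items():
            sentences.append(templates[pattern].format(" and ".join(fillers)))
        return " ".join(sentences)

    print(realise("HeLa", axioms))
    ```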

  16. Automating generation of textual class definitions from OWL to English

    PubMed Central

    2011-01-01

    Background: Text definitions for entities within bio-ontologies are a cornerstone of the effort to gain a consensus in understanding and usage of those ontologies. Writing these definitions is, however, a considerable effort and there is often a lag between specification of the main part of an ontology (logical descriptions and definitions of entities) and the development of the text-based definitions. The goal of natural language generation (NLG) from ontologies is to take the logical description of entities and generate fluent natural language. The application described here uses NLG to automatically provide text-based definitions from an ontology that has logical descriptions of its entities, so avoiding the bottleneck of authoring these definitions by hand. Results: To produce the descriptions, the program collects all the axioms relating to a given entity, groups them according to common structure, realises each group through an English sentence, and assembles the resulting sentences into a paragraph, to form as ‘coherent’ a text as possible without human intervention. Sentence generation is accomplished using a generic grammar based on logical patterns in OWL, together with a lexicon for realising atomic entities. We have tested our output for the Experimental Factor Ontology (EFO) using a simple survey strategy to explore the fluency of the generated text and how well it conveys the underlying axiomatisation. Two rounds of survey and improvement show that overall the generated English definitions are found to convey the intended meaning of the axiomatisation in a satisfactory manner. The surveys also suggested that one form of generated English will not be universally liked; that intrusion of too much ‘formal ontology’ was not liked; and that too much explicit exposure of OWL semantics was also not liked. Conclusions: Our prototype tools can generate reasonable paragraphs of English text that can act as definitions. The definitions were found acceptable by our survey and, as a result, the developers of EFO are sufficiently satisfied with the output that the generated definitions have been incorporated into EFO. Whilst not a substitute for hand-written textual definitions, our generated definitions are a useful starting point. Availability: An on-line version of the NLG text definition tool can be found at http://swat.open.ac.uk/tools/. The questionnaire and sample generated text definitions may be found at http://mcs.open.ac.uk/nlg/SWAT/bio-ontologies.html. PMID:21624160

  17. Evaluation of the efficiency and reliability of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1994-01-01

    There are numerous studies which show that CASE Tools greatly facilitate software development. As a result of these advantages, an increasing amount of software development is done with CASE Tools. As more software engineers become proficient with these tools, their experience and feedback lead to further development of the tools themselves. What has not been widely studied, however, is the reliability and efficiency of the actual code produced by the CASE Tools. This investigation considered these matters. Three segments of code generated by MATRIXx, one of many commercially available CASE Tools, were chosen for analysis: ETOFLIGHT, a portion of the Earth to Orbit Flight software, and ECLSS and PFMC, modules for Environmental Control and Life Support System and Pump Fan Motor Control, respectively.

  18. Robust optimization-based DC optimal power flow for managing wind generation uncertainty

    NASA Astrophysics Data System (ADS)

    Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn

    2012-11-01

    Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Preliminary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.
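
    As a rough illustration of the robust dispatch idea (hedging against reduced wind output), the following sketch solves a worst-case economic dispatch as a linear program. It deliberately omits the network (DC power flow and LMP) machinery of the paper's model, and all numbers are invented.

    ```python
    # A minimal sketch of worst-case (robust) dispatch against a wind interval,
    # using scipy's LP solver. Network/line constraints are omitted for brevity;
    # all numbers are illustrative, not from the paper.
    from scipy.optimize import linprog

    demand = 300.0                          # MW
    wind_forecast, wind_dev = 80.0, 40.0
    wind_worst = wind_forecast - wind_dev   # hedge against low wind output

    cost = [20.0, 50.0]                     # $/MWh for generators g1, g2
    bounds = [(0, 200), (0, 200)]           # generator capacity limits [MW]

    # Power balance: g1 + g2 = demand - worst-case wind
    res = linprog(c=cost, A_eq=[[1.0, 1.0]], b_eq=[demand - wind_worst],
                  bounds=bounds, method="highs")
    print(dict(g1=res.x[0], g2=res.x[1], cost=res.fun))
    # {'g1': 200.0, 'g2': 60.0, 'cost': 7000.0}
    ```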

  19. Developing Family Healthware, a family history screening tool to prevent common chronic diseases.

    PubMed

    Yoon, Paula W; Scheuner, Maren T; Jorgensen, Cynthia; Khoury, Muin J

    2009-01-01

    Family health history reflects the effects of genetic, environmental, and behavioral factors and is an important risk factor for a variety of disorders including coronary heart disease, cancer, and diabetes. In 2004, the Centers for Disease Control and Prevention developed Family Healthware, a new interactive, Web-based tool that assesses familial risk for 6 diseases (coronary heart disease, stroke, diabetes, and colorectal, breast, and ovarian cancer) and provides a "prevention plan" with personalized recommendations for lifestyle changes and screening. The tool collects data on health behaviors, screening tests, and disease history of a person's first- and second-degree relatives. Algorithms in the software analyze the family history data and assess familial risk based on the number of relatives affected, their age at disease onset, their sex, how closely related the relatives are to each other and to the user, and the combinations of diseases in the family. A second set of algorithms uses the data on familial risk level, health behaviors, and screening to generate personalized prevention messages. Qualitative and quantitative formative research on lay understanding of family history and genetics helped shape the tool's content, labels, and messages. Lab-based usability testing helped refine messages and tool navigation. The tool is being evaluated by 3 academic centers by using a network of primary care practices to determine whether personalized prevention messages tailored to familial risk will motivate people at risk to change their lifestyles or screening behaviors.
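
    The published algorithms are proprietary to the tool, but the following hypothetical sketch illustrates the general shape of such familial-risk scoring: counting affected relatives, weighting by degree of kinship, and boosting for early onset. The thresholds and weights are invented for illustration and are not Family Healthware's actual rules.

    ```python
    # A hypothetical sketch of familial-risk stratification; the weights and
    # cutoffs below are invented, NOT the actual Family Healthware algorithm.
    def familial_risk(relatives, disease):
        """relatives: list of dicts with degree (1 or 2), disease set, onset age."""
        affected = [r for r in relatives if disease in r["diseases"]]
        score = 0.0
        for r in affected:
            score += 2.0 if r["degree"] == 1 else 1.0   # closer kin weigh more
            if r["onset_age"] < 50:
                score += 1.0                            # early onset raises risk
        if score >= 3:
            return "strong"
        return "moderate" if score >= 1 else "average"

    family = [{"degree": 1, "diseases": {"diabetes"}, "onset_age": 45},
              {"degree": 2, "diseases": {"diabetes"}, "onset_age": 60}]
    print(familial_risk(family, "diabetes"))   # strong (score 4.0)
    ```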

  20. Entropy Generation/Availability Energy Loss Analysis Inside MIT Gas Spring and "Two Space" Test Rigs

    NASA Technical Reports Server (NTRS)

    Ebiana, Asuquo B.; Savadekar, Rupesh T.; Patel, Kaushal V.

    2006-01-01

    The results of the entropy generation and availability energy loss analysis under conditions of oscillating pressure and oscillating helium gas flow in two Massachusetts Institute of Technology (MIT) test rigs, piston-cylinder and piston-cylinder-heat exchanger, are presented. Two solution domains, the gas spring (single-space) in the piston-cylinder test rig and the gas spring + heat exchanger (two-space) in the piston-cylinder-heat exchanger test rig, are of interest. Sage and CFD-ACE+ commercial numerical codes are used to obtain 1-D and 2-D computer models, respectively, of each of the two solution domains and to simulate the oscillating gas flow and heat transfer effects in these domains. Second law analysis is used to characterize the entropy generation and availability energy losses inside the two solution domains. Internal and external entropy generation and availability energy loss results predicted by Sage and CFD-ACE+ are compared. Thermodynamic loss analysis of simple systems such as the MIT test rigs is often useful for understanding some important features of complex pattern-forming processes in more complex systems like the Stirling engine. This study is aimed at improving numerical codes for the prediction of thermodynamic losses via the development of a loss post-processor. The incorporation of loss post-processors in Stirling engine numerical codes will facilitate Stirling engine performance optimization. Loss analysis using entropy-generation rates due to heat and fluid flow is a relatively new technique for assessing component performance. It offers a deep insight into the flow phenomena, allows a more exact calculation of losses than is possible with traditional means involving the application of loss correlations, and provides an effective tool for improving component and overall system performance.
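
    For readers unfamiliar with the second-law bookkeeping involved, the sketch below evaluates the standard local entropy-generation rate, s_gen = k*(dT/dx)^2/T^2 + (mu/T)*(du/dx)^2, for a simplified 1-D field (heat-conduction plus a reduced viscous-dissipation term). The properties and profiles are illustrative helium-like values, not rig data.

    ```python
    # A short sketch of the local (volumetric) entropy-generation rate used in
    # second-law loss analysis, on a 1-D grid with illustrative values.
    import numpy as np

    def entropy_generation_1d(T, u, dx, k=0.15, mu=2.0e-5):
        """Return volumetric entropy generation rate [W/(m^3 K)] on a 1-D grid."""
        dTdx = np.gradient(T, dx)
        dudx = np.gradient(u, dx)
        return k * dTdx**2 / T**2 + (mu / T) * dudx**2

    x = np.linspace(0.0, 0.1, 101)            # 10 cm domain
    T = 300.0 + 50.0 * x / 0.1                # linear 300 -> 350 K profile
    u = 2.0 * np.sin(np.pi * x / 0.1)         # oscillating-flow snapshot
    s_gen = entropy_generation_1d(T, u, dx=x[1] - x[0])
    print(f"peak s_gen = {s_gen.max():.3e} W/(m^3 K)")
    ```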

  1. Condition monitoring of turning process using infrared thermography technique - An experimental approach

    NASA Astrophysics Data System (ADS)

    Prasad, Balla Srinivasa; Prabha, K. Aruna; Kumar, P. V. S. Ganesh

    2017-03-01

    In metal cutting, the major factors that affect cutting tool life are machine tool vibrations, tool tip/chip temperature, and surface roughness, along with machining parameters such as cutting speed, feed rate, depth of cut, and tool geometry. It is therefore important for the manufacturing industry to find suitable levels of process parameters for obtaining and maintaining tool life. Heat generation in cutting has long been a central topic in machining research. Recent advances in signal processing and information technology have enabled the use of multiple sensors for the development of effective tool condition monitoring systems with improved accuracy. From a process improvement point of view, it is more advantageous to proactively monitor quality directly in the process instead of the product, so that the consequences of a defective part can be minimized or even eliminated. In the present work, a real-time process monitoring method is explored using multiple sensors. It focuses on the development of a test bed for monitoring the tool condition in turning of AISI 316L steel using both coated and uncoated carbide inserts. The proposed tool condition monitoring (TCM) approach is evaluated in high speed turning using multiple sensors, namely a laser Doppler vibrometer and infrared thermography. The results indicate the feasibility of using the dominant frequency of the vibration signals, along with the temperature gradient, for the monitoring of high speed turning operations. A possible correlation is identified for both regular and irregular cutting tool wear. Cutting speed and feed rate proved to be influential parameters on the observed temperatures, while depth of cut was less influential. Generally, lower heat and temperatures are generated when coated inserts are employed, and cutting temperatures increase gradually as edge wear and deformation develop.
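
    A minimal sketch of the signal-processing step implied here: extracting the dominant frequency of a vibration signal with an FFT. The signal is synthetic and the sampling parameters are assumptions, not the study's instrumentation settings.

    ```python
    # A minimal sketch of dominant-frequency extraction from a vibration
    # signal, the quantity the study correlates with tool wear. The signal is
    # synthetic: a 1.2 kHz "chatter" component buried in noise.
    import numpy as np

    fs = 25_000                                    # sample rate [Hz], assumed
    t = np.arange(0, 0.5, 1 / fs)
    signal = np.sin(2 * np.pi * 1200 * t) + 0.5 * np.random.randn(t.size)

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    print(f"dominant frequency: {dominant:.0f} Hz")  # ~1200 Hz
    ```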

  2. Application of 'Six Sigma{sup TM}' and 'Design of Experiment' for Cementation - Recipe Development for Evaporator Concentrate for NPP Ling AO, Phase II (China) - 12555

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehrmann, Henning; Perdue, Robert

    2012-07-01

    Cementation of radioactive waste is a common technology. The waste is mixed with cement and water and forms a stable, solid block. Physical properties such as compressive strength and low leachability depend strongly on the cement recipe. Because the waste-cement mixture has to fulfill special requirements, recipe development is necessary. The Six Sigma{sup TM} DMAIC methodology, together with the Design of Experiments (DoE) approach, was employed to optimize the process of recipe development for cementation at the Ling Ao nuclear power plant (NPP) in China. DMAIC offers a structured, systematic and traceable process for deriving test parameters. DoE test plans and statistical analysis are efficient with regard to the number of test runs and the benefit gained from obtaining a transfer function. A transfer function enables simulation, which is useful for optimizing the later process and responding to changes. The DoE method was successfully applied to develop a cementation recipe for both evaporator concentrate and resin waste in the plant. The key input parameters were determined and evaluated, and control of these parameters was included in the design. The applied Six Sigma{sup TM} tools can help to organize the thinking during the engineering process. Data are organized and clearly presented. The set of variables can be reduced to the most important ones. The Six Sigma{sup TM} tools help make the thinking and decision process traceable, and they support data-driven decisions (e.g., the C and E Matrix). But the tools are not the only golden way: results from scoring tools like the C and E Matrix need close review before use. The DoE is an effective tool for generating test plans. DoE can be used with a small number of test runs, yet gives a valuable result from an engineering perspective in the form of a transfer function. The DoE predictions, however, are only valid in the tested area, so a careful selection of input parameters and their limits when setting up a DoE is very important. Extrapolation of results is not recommended because the results are not reliable outside the tested area. (authors)
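
    To illustrate the transfer-function idea, the sketch below fits a linear model with interaction to a two-level full factorial in two coded recipe factors. The factor choices and response values are invented; the actual Ling Ao recipe development used a different, larger design.

    ```python
    # A small sketch of the DoE idea: a two-level full factorial in two cement
    # recipe factors, fitted to a linear transfer function by least squares.
    # Factor names and responses are invented for illustration.
    import itertools
    import numpy as np

    # Coded levels (-1/+1), e.g. water/cement ratio and waste loading
    runs = np.array(list(itertools.product([-1, 1], repeat=2)), dtype=float)
    strength = np.array([32.0, 24.0, 38.0, 29.0])  # compressive strength [MPa]

    # Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
    X = np.column_stack([np.ones(4), runs[:, 0], runs[:, 1], runs[:, 0] * runs[:, 1]])
    coef, *_ = np.linalg.lstsq(X, strength, rcond=None)
    print("transfer function coefficients:", coef.round(2))

    # Predict an untested setting -- valid only inside the tested area
    x1, x2 = 0.5, -0.5
    print("predicted strength:", coef @ [1, x1, x2, x1 * x2])
    ```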

  3. Real-time high speed generator system emulation with hardware-in-the-loop application

    NASA Astrophysics Data System (ADS)

    Stroupe, Nicholas

    The emerging emphasis on and benefits of distributed generation in smaller scale networks have prompted much attention and research in this field. The growth of research in distributed generation has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task, and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important: an actual hardware unit is interfaced with a software-simulated environment to verify proper functionality. In this thesis, the technique is taken one step further by using hardware-in-the-loop to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. The test bed consists of a series of real hardware converters consistent with the Navy's All-Electric-Ship proposed power system, used to perform various tests on controls and stability under the expected non-linear load environment of Navy weaponry. The test bed can also support other distributed power system research topics and serves as a flexible hardware unit for a variety of tests; here it is used to perform and validate the newly developed method of generator system emulation. The dynamics of a high speed permanent magnet generator directly coupled with a microturbine are virtually simulated on an FPGA in real time. The calculated output stator voltage then serves as a reference for a controllable three-phase inverter at the input of the test bed, which emulates and reproduces these voltages on real hardware. The output of the inverter is connected to the rest of the test bed, which can take on a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and enable much more detailed system studies without the drawbacks of needing physical generator units. Among these advantages are safety, reduced costs, and the ability to scale while still preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation, explains the process and steps necessary to achieve it, and demonstrates real results with verification of numerical values in real time. The final goal is to show that this new approach is in fact attainable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.
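
    A highly simplified sketch of the emulation signal path: a steady open-circuit dq model of a PM generator produces the three-phase voltage references that the inverter would reproduce. The machine constants are illustrative assumptions, and the full FPGA model in the thesis includes dynamics omitted here.

    ```python
    # A minimal sketch of generating three-phase reference voltages from a
    # simulated PM generator. The steady open-circuit model (v_q = omega_e *
    # flux, v_d = 0) stands in for the full dq dynamics; constants are invented.
    import numpy as np

    pole_pairs, flux = 2, 0.05          # rotor flux linkage [Wb], illustrative
    rpm = 60_000                        # high-speed microturbine shaft speed
    omega_e = pole_pairs * rpm * 2 * np.pi / 60   # electrical speed [rad/s]

    def abc_reference(t):
        """Inverse Park transform of the open-circuit dq voltages at time t."""
        vd, vq = 0.0, omega_e * flux
        theta = omega_e * t
        va = vd * np.cos(theta) - vq * np.sin(theta)
        vb = vd * np.cos(theta - 2*np.pi/3) - vq * np.sin(theta - 2*np.pi/3)
        vc = vd * np.cos(theta + 2*np.pi/3) - vq * np.sin(theta + 2*np.pi/3)
        return va, vb, vc

    print([f"{v:8.1f}" for v in abc_reference(t=1e-5)])
    ```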

  4. Thermal protection system (TPS) monitoring using acoustic emission

    NASA Astrophysics Data System (ADS)

    Hurley, D. A.; Huston, D. R.; Fletcher, D. G.; Owens, W. P.

    2011-04-01

    This project investigates acoustic emission (AE) as a tool for monitoring the degradation of thermal protection systems (TPS). The AE sensors are part of an array of instrumentation on an inductively coupled plasma (ICP) torch designed for testing advanced thermal protection aerospace materials used on hypervelocity vehicles. AE are generated by stresses within the material, propagate as elastic stress waves, and can be detected with sensitive instrumentation. Graphite (POCO DFP-2) is used to study the gas-surface interaction during degradation of thermal protection materials. The plasma is produced by an RF magnetic field driven by a 30 kW power supply at 3.5 MHz, which creates a noisy environment with large spikes when powered on or off. AE are waveguided from source to sensor by a liquid-cooled copper probe used to position the graphite sample in the plasma stream. Preliminary testing was used to set filters and thresholds on the AE detection system (Physical Acoustics PCI-2) to minimize the impact of the considerable operating noise. Test results show good correlation between the AE data and the testing environment, which dictates the physics and chemistry of the thermal breakdown of the sample. Current efforts are focused on expanding the dataset and developing statistical analysis tools. This study shows the potential of AE as a powerful tool for the analysis of thermal protection material degradation, with the unique capability of real-time, in-situ monitoring.

  5. ESAS Deliverable PS 1.1.2.3: Customer Survey on Code Generations in Safety-Critical Applications

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Denney, Ewen

    2006-01-01

    Automated code generators (ACG) are tools that convert a (higher-level) model of a software (sub-)system into executable code without the necessity for a developer to actually implement the code. Although both commercially supported and in-house tools have been used in many industrial applications, little data exists on how these tools are used in safety-critical domains (e.g., spacecraft, aircraft, automotive, nuclear). The aims of the survey, therefore, were threefold: 1) to determine if code generation is primarily used as a tool for prototyping, including design exploration and simulation, or for flight/production code; 2) to determine the verification issues with code generators relating, in particular, to qualification and certification in safety-critical domains; and 3) to determine perceived gaps in functionality of existing tools.

  6. A material based approach to creating wear resistant surfaces for hot forging

    NASA Astrophysics Data System (ADS)

    Babu, Sailesh

    Tools and dies used in metal forming are characterized by extremely high temperatures at the interface, high local pressures, and large metal-to-metal sliding. These harsh conditions result in accelerated wear of tooling. Lubrication of tools, done to improve metal flow, drastically quenches the surface layers of the tools and compounds the tool failure problem. This becomes a serious issue when forged parts are complex and are expected to meet tight tolerances. Unpredictable, and hence uncontrolled, wear and degradation of tooling result in poor part quality and premature tool failure, which in turn cause high scrap rates, shop downtime, poor efficiency, and high cost. The objective of this dissertation is to develop a computer-based methodology for analyzing the requirements of hot forging tooling to resist wear and plastic deformation, and for predicting the life cycle of forge tooling. Development of such a system is complicated by the fact that wear and degradation of tooling are influenced not only by the die material used but also by numerous process variables such as the lubricant, dilution ratio, forging temperature, equipment used, and tool geometries, among others. Phenomenological models available in the literature give a good rule of thumb for selecting materials but do not provide a way to evaluate their performance in the field. Once a material is chosen, there are no proven approaches to creating surfaces from it. Coating approaches like PVD and CVD cannot generate the thick coatings necessary to withstand the conditions of hot forging. Welding cannot generate complex surfaces without several secondary operations such as heat treating and machining; if careful procedures are not followed, welds crack and seldom survive forging loads. There is a strong need for an approach to selectively, reliably, and precisely deposit a material of choice on an existing surface such that it exhibits not only good tribological properties but also good adhesion to the substrate. The dissertation outlines the development of a new cyclic contact test designed to recreate the intermittent tempering seen in hot forging. This test has been used to validate the use of tempering parameters in modeling the in-service softening of tool steel surfaces. The dissertation also outlines an industrial case study, conducted at a forging company, to validate the wear model, as well as efforts at Ohio State University to deposit nickel aluminide on an AISI H13 substrate using Laser Engineered Net Shaping (LENS). Results are reported from an array of experiments conducted on a LENS 750 machine at various power levels, table speeds, and hatch spacings, covering bond quality, surface finish, compositional gradients, and hardness. A thermal-based finite element numerical model used to simulate the LENS process is also presented, along with demonstration results.
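
    The tempering-parameter modeling mentioned above is commonly based on the Hollomon-Jaffe parameter; a short sketch follows, using the customary C = 20 constant, which is an assumption here rather than a value from the dissertation.

    ```python
    # A sketch of the classical Hollomon-Jaffe tempering parameter,
    # HJP = T * (C + log10(t)), often used to model in-service softening of
    # hot-work tool steels such as H13. C = 20 is a common choice for steels.
    import math

    def hollomon_jaffe(temp_c, hours, C=20.0):
        """Tempering parameter for temperature in deg C and time in hours."""
        return (temp_c + 273.15) * (C + math.log10(hours))

    # Equivalent softening: a shorter hot soak vs. a longer, cooler one
    print(f"{hollomon_jaffe(600, 2):.0f}")    # 2 h at 600 C
    print(f"{hollomon_jaffe(580, 10):.0f}")   # 10 h at 580 C -- similar HJP
    ```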

  7. Process development and tooling design for intrinsic hybrid composites

    NASA Astrophysics Data System (ADS)

    Riemer, M.; Müller, R.; Drossel, W. G.; Landgrebe, D.

    2017-09-01

    Hybrid parts, which combine the advantages of different material classes, are moving into the focus of lightweight applications. This development is amplified by their high potential for use in crash-relevant structures. In the current state of the art, hybrid parts are mainly made in separate, subsequent forming and joining processes. Using the concept of an intrinsic hybrid, the shaping of the part and the joining of the different materials are performed in a single process step, shortening the overall processing time and thereby reducing manufacturing costs. The investigated hybrid part is made from continuous fibre reinforced plastic (FRP), into which a metallic reinforcement structure is integrated. The connection between these layered components is realized by a combination of adhesive bonding and a geometrical form fit. The form fit elements are intrinsically generated during the forming process. This contribution describes the development of the forming process and the design of the forming tool for single-step production of a hybrid part. To this end a forming tool, which combines the thermoforming and metal forming processes, is developed. The main challenge in designing the tool is the temperature management of the tool elements for the variothermal forming process. The process parameters are determined in basic tests and finite element (FE) simulation studies. On the basis of these investigations a control concept for steering the motion axes and the tool temperature is developed. Forming tests are carried out with the developed tool and the manufactured parts are analysed by computer assisted tomography (CT) scans.

  8. Data Mining Techniques Applied to Hydrogen Lactose Breath Test.

    PubMed

    Rubio-Escudero, Cristina; Valverde-Fernández, Justo; Nepomuceno-Chamorro, Isabel; Pontes-Balanza, Beatriz; Hernández-Mendoza, Yoedusvany; Rodríguez-Herrera, Alfonso

    2017-01-01

    The aim was to analyze a set of hydrogen breath test data using data mining tools and to identify new patterns of H2 production. K-means clustering was applied as the data mining technique to a dataset of 2571 patients. Six different patterns were extracted upon analysis of the hydrogen breath test data, and the relevance of each of the samples taken throughout the test was established. Analysis of the hydrogen breath test data sets using data mining techniques identified new patterns of hydrogen generation upon lactose absorption, demonstrating the potential of applying data mining techniques to clinical data sets. These results offer promising data for future research on the relations between hydrogen produced by the gut microbiota and clinical symptoms.
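
    A minimal sketch of the clustering step, assuming the per-patient data are H2 concentration curves sampled at fixed times (the exact feature set used in the study is not specified here); six clusters mirror the six reported patterns, and the data below are synthetic.

    ```python
    # A minimal sketch of k-means over per-patient H2 breath-test curves
    # (ppm sampled over the test). Data here is synthetic.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # 200 synthetic patients x 5 samples (e.g., 0, 30, 60, 90, 120 min)
    flat = rng.uniform(0, 10, (100, 5))                           # non-producers
    risers = 5 + np.cumsum(rng.uniform(5, 15, (100, 5)), axis=1)  # producers
    curves = np.vstack([flat, risers])

    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(curves)
    print(np.bincount(km.labels_))        # patients per extracted pattern
    ```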

  9. Semi-autonomous remote sensing time series generation tool

    NASA Astrophysics Data System (ADS)

    Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher

    2017-10-01

    High spatial and temporal resolution data is vital for crop monitoring and phenology change detection. Due to limits of satellite architecture and frequent cloud cover, the availability of daily high-spatial-resolution data is still far from reality, and generating remote sensing time series of high spatial and temporal resolution by data fusion is a practical alternative. However, it is not an easy process, since it involves multiple steps and requires multiple tools. In this paper, a Geo Information System (GIS) based framework is presented for semi-autonomous time series generation. This tool eliminates the difficulties by automating all the steps and enables users to generate synthetic time series data with ease. First, all the steps required for the time series generation process are identified and grouped into blocks based on their functionality. Two main frameworks are then created: one performs all the preprocessing steps on various satellite data, and the other performs the data fusion to generate the time series, as sketched below. The two frameworks can be used individually for specific tasks or combined to perform both processes in one go. The tool can handle most of the known geodata formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. It is developed as a common platform with a clean interface and provides extensive functionality to enable further development of remote sensing applications. A detailed description of the capabilities and advantages of the frameworks is given in this paper.
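
    As a deliberately naive stand-in for the fusion step (production tools use algorithms such as STARFM), the sketch below predicts a fine-resolution image at a later date by adding the coarse sensor's temporal change to the last fine observation; all arrays are synthetic.

    ```python
    # A deliberately naive data-fusion sketch: predict a fine-resolution image
    # at t1 from one fine image at t0 plus the coarse sensor's change t0 -> t1.
    import numpy as np

    def naive_fuse(fine_t0, coarse_t0, coarse_t1):
        """Fine-scale prediction at t1 from one fine image and two coarse ones."""
        upsample = lambda img, f: np.kron(img, np.ones((f, f)))  # nearest-neighbour
        factor = fine_t0.shape[0] // coarse_t0.shape[0]
        delta = upsample(coarse_t1 - coarse_t0, factor)
        return fine_t0 + delta

    fine_t0 = np.random.rand(60, 60)       # e.g. Landsat-like image at t0
    coarse_t0 = fine_t0.reshape(20, 3, 20, 3).mean(axis=(1, 3))  # MODIS-like, t0
    coarse_t1 = coarse_t0 + 0.1            # uniform greening by t1
    print(naive_fuse(fine_t0, coarse_t0, coarse_t1).mean().round(2))
    ```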

  10. Chemometrics-based process analytical technology (PAT) tools: applications and adaptation in pharmaceutical and biopharmaceutical industries.

    PubMed

    Challa, Shruthi; Potumarthi, Ravichandra

    2013-01-01

    Process analytical technology (PAT) is used to monitor and control critical process parameters in raw materials and in-process products to maintain critical quality attributes and build quality into the product. Process analytical technology can be successfully implemented in pharmaceutical and biopharmaceutical industries not only to impart quality into the products but also to prevent out-of-specification results and improve productivity. PAT implementation eliminates the drawbacks of traditional methods, which involve excessive sampling, and facilitates rapid testing through direct sampling without destruction of the sample. However, to successfully adapt PAT tools to pharmaceutical and biopharmaceutical environments, a thorough understanding of the process is needed, along with mathematical and statistical tools to analyze the large multidimensional spectral data generated by PAT tools. Chemometrics is a chemical discipline which incorporates both statistical and mathematical methods to obtain and analyze relevant information from PAT spectral tools. Applications of commonly used PAT tools in combination with appropriate chemometric methods, along with their advantages and working principles, are discussed. Finally, the systematic application of PAT tools in a biopharmaceutical environment to control critical process parameters for achieving product quality is diagrammatically represented.
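
    As a concrete example of the chemometric side, the sketch below fits a partial least squares (PLS) model mapping synthetic NIR-like spectra to an analyte concentration, the kind of multivariate calibration that turns PAT spectra into a monitored quality attribute. The spectra and band parameters are invented.

    ```python
    # A minimal chemometrics sketch: PLS regression from synthetic NIR-like
    # spectra to an analyte concentration.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    wavelengths = np.linspace(1100, 2500, 200)
    conc = rng.uniform(0, 1, 80)                           # 80 calibration samples
    peak = np.exp(-((wavelengths - 1700) / 40) ** 2)       # analyte absorption band
    spectra = conc[:, None] * peak + 0.01 * rng.standard_normal((80, 200))

    pls = PLSRegression(n_components=3).fit(spectra, conc)
    print("R^2 on calibration set:", round(pls.score(spectra, conc), 3))
    ```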

  11. Microstructural Evolution in Friction Stir Welding of Ti-6Al-4V

    NASA Technical Reports Server (NTRS)

    Rubisoff, H.; Querin, J.; Magee, D.; Schneider, J.

    2008-01-01

    Friction stir welding (FSW) is a thermo-mechanical process that utilizes a nonconsumable rotating pin tool to consolidate a weld joint. In the conventional FSW process, the pin tool is responsible for generating both the heat required to soften the material and the forces necessary to deform and combine the weld seam. As such, the geometry of the pin tool is important to the quality of the weld and the process parameters required to produce the weld. Because the geometry of the pin tool is limitless, a reduced set of pin tools was formed to systematically study their effect on the weldment with respect to mechanical properties and resultant microstructure. In this study 0deg, 15deg, 30deg, 45deg, and 60deg tapered, microwave sintered, tungsten carbide (WC) pin tools were used to FSW Ti-6Al-4V. Transverse sections of the weld were used to test for mechanical properties and to document the microstructure using optical microscopy. X-ray diffraction (XRD) was also used to characterize the microstructure in the welds. FSW results for the 45deg and 60deg pin tools are reported in this paper.

  12. The ELPAT living organ donor Psychosocial Assessment Tool (EPAT): from 'what' to 'how' of psychosocial screening - a pilot study.

    PubMed

    Massey, Emma K; Timmerman, Lotte; Ismail, Sohal Y; Duerinckx, Nathalie; Lopes, Alice; Maple, Hannah; Mega, Inês; Papachristou, Christina; Dobbels, Fabienne

    2018-01-01

    Thorough psychosocial screening of donor candidates is required in order to minimize potential negative consequences and to strive for optimal safety within living donation programmes. We aimed to develop an evidence-based tool to standardize the psychosocial screening process. Key concepts of psychosocial screening were used to structure our tool: motivation and decision-making, personal resources, psychopathology, social resources, ethical and legal factors, and information and risk processing. We (i) discussed how each item per concept could be measured, (ii) reviewed and rated available validated tools, (iii) where necessary developed new items, (iv) assessed content validity and (v) pilot-tested the new items. The resulting ELPAT living organ donor Psychosocial Assessment Tool (EPAT) consists of a selection of validated questionnaires (28 items in total), a semi-structured interview (43 questions) and a Red Flag Checklist. We outline optimal procedures and conditions for implementing this tool. The EPAT and user manual are available from the authors. Use of this tool will standardize the psychosocial screening procedure, ensure that no psychosocial issues are overlooked, ensure that comparable selection criteria are used, and facilitate the generation of comparable psychosocial data on living donor candidates. © 2017 Steunstichting ESOT.

  13. Development and Validity Testing of the Worksite Health Index: An Assessment Tool to Help and Improve Korean Employees' Health-Related Outcome.

    PubMed

    Yun, Young Ho; Sim, Jin Ah; Lim, Ye Jin; Lim, Cheol Il; Kang, Sung-Choon; Kang, Joon-Ho; Park, Jun Dong; Noh, Dong Young

    2016-06-01

    The objective of this study was to develop the Worksite Health Index (WHI) and validate its psychometric properties. The development of the WHI questionnaire included item generation, item construction, and field testing. To assess the instrument's reliability and validity, we recruited 30 different Korean worksites. We developed the WHI questionnaire of 136 items categorized into five domains, namely Governance and Infrastructure, Need Assessment and Planning, Health Prevention and Promotion Program, Occupational Safety, and Monitoring and Feedback. All WHI domains demonstrated high reliability with good internal consistency. The total WHI scores differentiated worksite groups effectively according to firm size. Each domain was associated significantly with employees' health status, absence, and financial outcome. The WHI can assess comprehensive worksite health programs. This tool is publicly available to address the growing need for worksite health programs.

  14. Control of flexible structures

    NASA Technical Reports Server (NTRS)

    Russell, R. A.

    1985-01-01

    The requirements for future space missions indicate that many of these spacecraft will be large, flexible, and in some applications, require precision geometries. A technology program that addresses the issues associated with the structure/control interactions for these classes of spacecraft is discussed. The goal of the NASA control of flexible structures technology program is to generate a technology data base that will provide the designer with options and approaches to achieve spacecraft performance such as maintaining geometry and/or suppressing undesired spacecraft dynamics. This technology program will define the appropriate combination of analysis, ground testing, and flight testing required to validate the structural/controls analysis and design tools. This work was motivated by a recognition that large minimum weight space structures will be required for many future missions. The tools necessary to support such design included: (1) improved structural analysis; (2) modern control theory; (3) advanced modeling techniques; (4) system identification; and (5) the integration of structures and controls.

  15. Remote Visualization and Remote Collaboration On Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Watson, Val; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit).

  16. Computer generated maps from digital satellite data - A case study in Florida

    NASA Technical Reports Server (NTRS)

    Arvanitis, L. G.; Reich, R. M.; Newburne, R.

    1981-01-01

    Ground cover maps are important tools to a wide array of users. Over the past three decades, much progress has been made in supplementing planimetric and topographic maps with ground cover details obtained from aerial photographs. The present investigation evaluates the feasibility of using computer maps of ground cover from satellite input tapes. Attention is given to the selection of test sites, a satellite data processing system, a multispectral image analyzer, general purpose computer-generated maps, the preliminary evaluation of computer maps, a test for areal correspondence, the preparation of overlays and acreage estimation of land cover types on the Landsat computer maps. There is every indication to suggest that digital multispectral image processing systems based on Landsat input data will play an increasingly important role in pattern recognition and mapping land cover in the years to come.

  17. EpHLA: an innovative and user-friendly software automating the HLAMatchmaker algorithm for antibody analysis.

    PubMed

    Sousa, Luiz Cláudio Demes da Mata; Filho, Herton Luiz Alves Sales; Von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; Neto, Pedro de Alcântara dos Santos; de Castro, José Adail Fonseca; do Monte, Semíramis Jamil Hadad

    2011-12-01

    The global challenge for solid organ transplantation programs is to distribute organs to the highly sensitized recipients. The purpose of this work is to describe and test the functionality of the EpHLA software, a program that automates the analysis of acceptable and unacceptable HLA epitopes on the basis of the HLAMatchmaker algorithm. HLAMatchmaker considers small configurations of polymorphic residues referred to as eplets as essential components of HLA-epitopes. Currently, the analyses require the creation of temporary files and the manual cut and paste of laboratory tests results between electronic spreadsheets, which is time-consuming and prone to administrative errors. The EpHLA software was developed in Object Pascal programming language and uses the HLAMatchmaker algorithm to generate histocompatibility reports. The automated generation of reports requires the integration of files containing the results of laboratory tests (HLA typing, anti-HLA antibody signature) and public data banks (NMDP, IMGT). The integration and the access to this data were accomplished by means of the framework called eDAFramework. The eDAFramework was developed in Object Pascal and PHP and it provides data access functionalities for software developed in these languages. The tool functionality was successfully tested in comparison to actual, manually derived reports of patients from a renal transplantation program with related donors. We successfully developed software, which enables the automated definition of the epitope specificities of HLA antibodies. This new tool will benefit the management of recipient/donor pairs selection for highly sensitized patients. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Development of Anthropometric Analogous Headforms. Phase 1.

    DTIC Science & Technology

    1994-10-31

    shown in figure 5. This surface mesh can then be transformed into polygon faces that are able to be rendered by the AutoCAD rendering tools. Rendering of... computer-generated surfaces. The material removal techniques require the programming of the tool path of the cutter and in some cases requires specialized... tooling. Tool path programs are available to transfer the computer-generated surface into actual paths of the cutting tool. In cases where the

  19. Power Plant Model Validation Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The PPMV is used to validate generator models using disturbance recordings. The PPMV tool contains a collection of power plant models and model validation studies, as well as disturbance recordings from a number of historic grid events. The user can import data from a new disturbance into the database, which converts PMU and SCADA data into GE PSLF format, and then run the tool to validate (or invalidate) the model for a specific power plant against its actual performance. The PNNL PPMV tool enables automation of the power plant model validation process using disturbance recordings. The tool uses PMU and SCADA measurements as input, automatically adjusts all required EPCL scripts, and interacts with GE PSLF in batch mode. The main features include: interaction with GE PSLF; use of the GE PSLF Play-In function for generator model validation; a database of projects (model validation studies); a database of historic events; a database of power plants; advanced visualization capabilities; and automatic report generation.

  20. Application of Multimedia Design Principles to Visuals Used in Course-Books: An Evaluation Tool

    ERIC Educational Resources Information Center

    Kuzu, Abdullah; Akbulut, Yavuz; Sahin, Mehmet Can

    2007-01-01

    This paper introduces an evaluation tool prepared to examine the quality of visuals in course-books. The tool is based on Mayer's Cognitive Theory of Multimedia Learning (i.e. Generative Theory) and its principles regarding the correct use of illustrations within text. The reason to generate the tool, the development process along with the…

  1. Cryopyrin-associated periodic syndromes: development of a patient-reported outcomes instrument to assess the pattern and severity of clinical disease activity.

    PubMed

    Hoffman, Hal M; Wolfe, Frederick; Belomestnov, Pavel; Mellis, Scott J

    2008-09-01

    Development of an instrument for characterization of symptom patterns and severity in patients with cryopyrin-associated periodic syndromes (CAPS) is described. Two generations of daily health assessment forms (DHAFs) were evaluated in this study. The first-generation DHAF queried 11 symptoms. Analyses of results obtained with that instrument identified five symptoms included in a revised second-generation DHAF that was tested for internal consistency and test-retest reliability. This DHAF was also assessed during the initial portion of a phase 3 clinical study of CAPS treatment. Forty-eight CAPS patients provided data for the first-generation DHAFs. Five symptoms (rash, fever, joint pain, eye redness/pain, and fatigue) were included in the revised second-generation DHAF. Symptom severity was highly variable during all study phases, with as many as 89% of patients reporting at least one symptom flare and percentages of days with flares reaching 58% during evaluation of the second-generation instrument. Mean composite key symptom scores (KSSs) computed during evaluation of the second-generation DHAF correlated well with the Physician's Global Assessment of Disease Activity (r=0.91, p<0.0001) and patient reports of limitations of daily activities (r=0.68, p<0.0001). Test-retest reliability and Cronbach's alphas were high (0.93 and 0.94, respectively) for the second-generation DHAF. Further evaluation of this DHAF during a baseline period and placebo treatment in a phase 3 clinical study of CAPS patients indicated strong correlations between baseline KSS and the Physician's Global Assessment of Disease Activity. Cronbach's alphas at baseline and test-retest reliability were also high. Potentially important study limitations include small sample size, the lack of a standard tool for CAPS symptom assessment against which to validate the DHAF, and no assessment of the instrument's responsivity to CAPS therapy. The DHAF is a new instrument that may be useful for capturing symptom patterns and severity in CAPS patients and monitoring responses to therapies for these conditions.

  2. ConsDock: A new program for the consensus analysis of protein-ligand interactions.

    PubMed

    Paul, Nicodème; Rognan, Didier

    2002-06-01

    Protein-based virtual screening of chemical libraries is a powerful technique for identifying new molecules that may interact with a macromolecular target of interest. Because of docking and scoring limitations, it is more difficult to apply as a lead optimization method, because it requires that the docking/scoring tool propose as few solutions as possible, all of them highly accurate for both the protein-bound orientation and the conformation of the ligand. In the present study, we present a consensus docking approach (ConsDock) that takes advantage of three widely used docking tools (Dock, FlexX, and Gold). The consensus analysis of all possible poses generated by several docking tools is performed sequentially in four steps: (i) hierarchical clustering of all poses generated by a docking tool into families represented by a leader; (ii) definition of all consensus pairs from leaders generated by different docking programs; (iii) clustering of consensus pairs into classes, represented by a mean structure; and (iv) ranking the different means starting from the most populated class of consensus pairs. When applied to a test set of 100 protein-ligand complexes from the Protein Data Bank, ConsDock significantly outperforms single docking with respect to the docking accuracy of the top-ranked pose. In 60% of the cases investigated here, ConsDock was able to rank as top solution a pose within 2 Å RMSD of the X-ray structure. It can be applied as a postprocessing filter to either single- or multiple-docking programs to prioritize three-dimensional guided lead optimization from the most likely docking solution. Copyright 2002 Wiley-Liss, Inc.
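
    Step (i) above can be illustrated with a short sketch: hierarchical clustering of docking poses under a pairwise-RMSD cutoff, each family represented by a leader. The poses here are synthetic coordinate sets, and the cutoff is an assumption; ConsDock's actual clustering parameters may differ.

    ```python
    # A sketch of hierarchical pose clustering with an RMSD cutoff. Poses are
    # synthetic N x 3 coordinate sets; scipy provides the clustering machinery.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(2)
    base = rng.uniform(-5, 5, (20, 3))                # a 20-atom ligand
    shifted = base + np.array([3.0, 0.0, 0.0])        # a second binding mode
    poses = np.array([base + rng.normal(0, 0.1, base.shape) for _ in range(3)] +
                     [shifted + rng.normal(0, 0.1, base.shape) for _ in range(2)])

    flat = poses.reshape(len(poses), -1)
    # pairwise RMSD = euclidean distance over flattened coords / sqrt(n_atoms)
    rmsd = pdist(flat) / np.sqrt(base.shape[0])
    labels = fcluster(linkage(rmsd, method="average"), t=2.0, criterion="distance")
    print(labels)   # e.g. [1 1 1 2 2] -> two pose families, leader = first of each
    ```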

  3. Effectiveness of a Technology-Based Intervention to Teach Evidence-Based Practice: The EBR Tool.

    PubMed

    Long, JoAnn D; Gannaway, Paula; Ford, Cindy; Doumit, Rita; Zeeni, Nadine; Sukkarieh-Haraty, Ola; Milane, Aline; Byers, Beverly; Harrison, LaNell; Hatch, Daniel; Brown, Justin; Proper, Sharlan; White, Patricia; Song, Huaxin

    2016-02-01

    As the world becomes increasingly digital, advances in technology have changed how students access evidence-based information. Research suggests that students overestimate their ability to locate quality online research and lack the skills needed to evaluate the scientific literature. Clinical nurses report relying on personal experience to answer clinical questions rather than searching evidence-based sources. To address the problem, a web-based, evidence-based research (EBR) tool that is usable from a computer, smartphone, or iPad was developed and tested. The purpose of the EBR tool is to guide students through the basic steps needed to locate and critically appraise the online scientific literature while linking users to quality electronic resources to support evidence-based practice (EBP). Testing of the tool took place in a mixed-method, quasi-experimental, and two-population randomized controlled trial (RCT) design in a U.S. and Middle East university. A statistically significant improvement in overall research skills was supported in the quasi-experimental nursing student group and RCT nutrition student group using the EBR tool. A statistically significant proportional difference was supported in the RCT nutrition and PharmD intervention groups in participants' ability to distinguish the credibility of online source materials compared with controls. The majority of participants could correctly apply PICOTS to a case study when using the tool. The data from this preliminary study suggests that the EBR tool enhanced student overall research skills and selected EBP skills while generating data for assessment of learning outcomes. The EBR tool places evidence-based resources at the fingertips of users by addressing some of the most commonly cited barriers to research utilization while exposing users to information and online literacy standards of practice, meeting a growing need within nursing curricula. © 2016 Sigma Theta Tau International.

  4. A Modeling Tool for Household Biogas Burner Flame Port Design

    NASA Astrophysics Data System (ADS)

    Decker, Thomas J.

    Anaerobic digestion is a well-known and potentially beneficial process for rural communities in emerging markets, providing the opportunity to generate usable gaseous fuel from agricultural waste. With recent developments in low-cost digestion technology, communities across the world are gaining affordable access to the benefits of anaerobic digestion derived biogas. For example, biogas can displace conventional cooking fuels such as biomass (wood, charcoal, dung) and Liquefied Petroleum Gas (LPG), effectively reducing harmful emissions and fuel cost respectively. To support the ongoing scaling effort of biogas in rural communities, this study has developed and tested a design tool aimed at optimizing flame port geometry for household biogas-fired burners. The tool consists of a multi-component simulation that incorporates three-dimensional CAD designs with simulated chemical kinetics and computational fluid dynamics. An array of circular and rectangular port designs was developed for a widely available biogas stove (called the Lotus) as part of this study. These port designs were created through guidance from previous studies found in the literature. The three highest performing designs identified by the tool were manufactured and tested experimentally to validate tool output and to compare against the original port geometry. The experimental results aligned with the tool's prediction for the three chosen designs. Each design demonstrated improved thermal efficiency relative to the original, with one configuration of circular ports exhibiting superior performance. The results of the study indicated that designing for a targeted range of port hydraulic diameter, velocity and mixture density in the tool is a relevant way to improve the thermal efficiency of a biogas burner. Conversely, the emissions predictions made by the tool were found to be unreliable and incongruent with laboratory experiments.
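
    Two of the quantities the tool targets, port hydraulic diameter and mean port exit velocity, reduce to simple geometry; the sketch below computes them for an invented rectangular-port layout, not the Lotus stove's actual dimensions.

    ```python
    # A small sketch of the port quantities the tool targets: hydraulic
    # diameter (D_h = 4A/P) and mean port exit velocity. Numbers are invented.
    import math

    def hydraulic_diameter(area, perimeter):
        return 4.0 * area / perimeter

    def port_exit_velocity(flow_m3_s, n_ports, port_area):
        return flow_m3_s / (n_ports * port_area)

    w, h = 0.004, 0.002                     # 4 x 2 mm rectangular port [m]
    area, perim = w * h, 2 * (w + h)
    flow = 8.0e-5                           # ~0.08 L/s biogas-air mixture
    print(f"D_h = {1000 * hydraulic_diameter(area, perim):.2f} mm")
    print(f"V   = {port_exit_velocity(flow, 24, area):.2f} m/s")
    ```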

  5. Increasing power generation in horizontal axis wind turbines using optimized flow control

    NASA Astrophysics Data System (ADS)

    Cooney, John A., Jr.

    In order to effectively realize future goals for wind energy, the efficiency of wind turbines must increase beyond existing technology. One direct method for achieving increased efficiency is improving the individual power generation characteristics of horizontal axis wind turbines. The potential for additional improvement by traditional approaches is diminishing rapidly, however. As a result, a research program was undertaken to assess the potential of using distributed flow control to increase power generation. The overall objective was the development of validated aerodynamic simulations and flow control approaches to improve wind turbine power generation characteristics. BEM analysis was conducted for a general set of wind turbine models encompassing past, current, and next-generation designs. This analysis indicated that rotor lift control applied in Region II of the turbine power curve would produce a notable increase in annual power generated. This was achieved by optimizing induction factors along the rotor blade for maximum power generation. In order to demonstrate this approach and other advanced concepts, the University of Notre Dame established the Laboratory for Enhanced Wind Energy Design (eWiND). This initiative includes a fully instrumented meteorological tower and two pitch-controlled wind turbines. The wind turbines are representative in their design and operation of larger multi-megawatt turbines, but of a scale that allows rotors to be easily instrumented and replaced to explore new design concepts. Baseline data detailing typical site conditions and turbine operation are presented. To realize optimized performance, lift control systems were designed and evaluated in CFD simulations coupled with shape optimization tools. These were integrated into a systematic design methodology involving BEM simulations, CFD simulations and shape optimization, and selected experimental validation. To refine and illustrate the proposed design methodology, a complete design cycle was performed for the turbine model used in the wind energy lab. Enhanced power generation was obtained through passive trailing edge shaping aimed at reaching the lift and lift-to-drag goals predicted to optimize performance. These targets were determined by BEM analysis to improve power generation characteristics and annual energy production (AEP) for the wind turbine. A preliminary design was validated in wind tunnel experiments on a 2D rotor section in preparation for testing in the full atmospheric environment of the eWiND Laboratory. These tests were performed for the full-scale geometry and atmospheric conditions. After further improvements to the shape optimization tools, a series of trailing edge additions was designed to optimize power generation. The trailing edge additions were predicted to increase AEP by up to 4.2% at the White Field site. The pieces were rapid-prototyped and installed on the wind turbine in March 2014. Field tests are ongoing.
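
    The induction-factor optimization has a classical closed form in actuator-disc theory: power is maximized at a = 1/3, giving the Betz limit Cp = 16/27. The sketch below evaluates P(a) = 0.5*rho*A*U^3 * 4a(1-a)^2 to show this; it is a textbook illustration, not the BEM code used in the program.

    ```python
    # A short sketch of Region II power as a function of a uniform axial
    # induction factor a, with the Betz optimum at a = 1/3.
    import numpy as np

    def rotor_power(a, wind_speed, radius, rho=1.225):
        """Actuator-disc power: P = 0.5*rho*A*U^3 * 4a(1-a)^2."""
        area = np.pi * radius**2
        cp = 4.0 * a * (1.0 - a) ** 2
        return 0.5 * rho * area * wind_speed**3 * cp

    a = np.linspace(0.0, 0.5, 51)
    P = rotor_power(a, wind_speed=8.0, radius=10.0)
    print(f"optimal a = {a[P.argmax()]:.2f}, Cp_max = {4*(1/3)*(2/3)**2:.3f}")
    # optimal a = 0.33, Cp_max = 0.593
    ```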

  6. Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Malone, Adam M.; Paul, Bernard P., Jr.; Woodard, Brian S.

    2016-01-01

    Icing simulation tools and computational fluid dynamics codes are reaching levels of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing conditions with increasingly less reliance on natural-icing flight testing and icing-wind-tunnel testing. Sufficient high-quality data to evaluate the performance of these tools is not currently available. The objective of this work was to generate a database of ice-accretion geometry that can be used for development and validation of icing simulation tools as well as for aerodynamic testing. Three large-scale swept wing models were built and tested at the NASA Glenn Icing Research Tunnel (IRT). The models represented the Inboard (20% semispan), Midspan (64% semispan) and Outboard stations (83% semispan) of a wing based upon a 65% scale version of the Common Research Model (CRM). The IRT models utilized a hybrid design that maintained the full-scale leading-edge geometry with a truncated afterbody and flap. The models were instrumented with surface pressure taps in order to acquire sufficient aerodynamic data to verify the hybrid model design capability to simulate the full-scale wing section. A series of ice-accretion tests were conducted over a range of total temperatures from -23.8 deg C to -1.4 deg C with all other conditions held constant. The results showed the changing ice-accretion morphology from rime ice at the colder temperatures to highly 3-D scallop ice in the range of -11.2 deg C to -6.3 deg C. Warmer temperatures generated highly 3-D ice accretion with glaze ice characteristics. The results indicated that the general scallop ice morphology was similar for all three models. Icing results were documented for limited parametric variations in angle of attack, drop size and cloud liquid-water content (LWC). The effect of velocity on ice accretion was documented for the Midspan and Outboard models for a limited number of test cases. The data suggest that there are morphological characteristics of glaze and scallop ice accretion on these swept-wing models that are dependent upon the velocity. This work has resulted in a large database of ice-accretion geometry on large-scale, swept-wing models.

  7. Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Malone, Adam M.; Paul, Bernard P., Jr.; Woodard, Brian S.

    2016-01-01

    Icing simulation tools and computational fluid dynamics codes are reaching levels of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing conditions with increasingly less reliance on natural-icing flight testing and icing-wind-tunnel testing. Sufficient high-quality data to evaluate the performance of these tools is not currently available. The objective of this work was to generate a database of ice-accretion geometry that can be used for development and validation of icing simulation tools as well as for aerodynamic testing. Three large-scale swept wing models were built and tested at the NASA Glenn Icing Research Tunnel (IRT). The models represented the Inboard (20 percent semispan), Midspan (64 percent semispan) and Outboard stations (83 percent semispan) of a wing based upon a 65 percent scale version of the Common Research Model (CRM). The IRT models utilized a hybrid design that maintained the full-scale leading-edge geometry with a truncated afterbody and flap. The models were instrumented with surface pressure taps in order to acquire sufficient aerodynamic data to verify the hybrid model design capability to simulate the full-scale wing section. A series of ice-accretion tests were conducted over a range of total temperatures from -23.8 to -1.4 C with all other conditions held constant. The results showed the changing ice-accretion morphology from rime ice at the colder temperatures to highly 3-D scallop ice in the range of -11.2 to -6.3 C. Warmer temperatures generated highly 3-D ice accretion with glaze ice characteristics. The results indicated that the general scallop ice morphology was similar for all three models. Icing results were documented for limited parametric variations in angle of attack, drop size and cloud liquid-water content (LWC). The effect of velocity on ice accretion was documented for the Midspan and Outboard models for a limited number of test cases. The data suggest that there are morphological characteristics of glaze and scallop ice accretion on these swept-wing models that are dependent upon the velocity. This work has resulted in a large database of ice-accretion geometry on large-scale, swept-wing models.

  8. Space Laboratory on a Table Top: A Next Generative ECLSS design and diagnostic tool

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    2005-01-01

    This paper describes the development plan for a comprehensive research and diagnostic tool for aspects of advanced life support systems in space-based laboratories. Specifically, it aims to build a high fidelity tabletop model that can be used for the purposes of risk mitigation, failure mode analysis, contamination tracking, and reliability testing. We envision a comprehensive approach involving experimental work coupled with numerical simulation to develop this diagnostic tool. The concept is a 10% scale transparent model of a space platform such as the International Space Station that operates with water, or a specific matched-index-of-refraction liquid, as the working fluid. This allows the scaling of a 10 ft x 10 ft x 10 ft room with air flow down to a 1 ft x 1 ft x 1 ft tabletop model with water/liquid flow. Dynamic similitude at this length scale dictates model velocities of 67% of full scale, and thereby a model time scale of 15% of the full-scale system, meaning identical processes in the model are completed in 15% of the full-scale time. The use of an index matching fluid (a fluid that matches the refractive index of cast acrylic, the model material) allows making the entire model, with its complex internal geometry, transparent and hence conducive to non-intrusive optical diagnostics. Using such a system, one can test environment control parameters such as core (axial) flows and cross flows (from registers and diffusers), investigate potential problem areas such as flow short circuits, inadequate oxygen content, or build-up of other gases beyond desirable levels, test mixing processes within the system at local nodes or compartments, and assess overall system performance. The system allows quantitative measurements of contaminants introduced into the system and allows testing and optimizing the tracking process and removal of contaminants. The envisaged system will be modular and hence flexible for quick configuration changes and subsequent testing. The data and inferences from the tests will allow for improvements in the development and design of next-generation life support systems and configurations. Preliminary experimental and modeling work in this area is presented, involving testing of a single inlet-exit model with detailed 3-D flow visualization and quantitative diagnostics, together with computational modeling of the system.
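
    The quoted 67% and 15% figures follow directly from Reynolds-number matching between the water model and the full-scale air system; a few lines make the arithmetic explicit (the viscosities are nominal room-temperature values, assumed here).

    ```python
    # A sketch of the similitude arithmetic quoted above: matching Reynolds
    # number between a 1/10-scale water model and a full-scale air system.
    nu_air, nu_water = 1.5e-5, 1.0e-6     # kinematic viscosities [m^2/s]
    scale = 0.1                            # model/full-scale length ratio

    velocity_ratio = (1.0 / scale) * (nu_water / nu_air)   # from Re_m = Re_f
    time_ratio = scale / velocity_ratio                    # (L/V)_m / (L/V)_f
    print(f"model velocity = {velocity_ratio:.0%} of full scale")  # ~67%
    print(f"model time     = {time_ratio:.0%} of full scale")      # ~15%
    ```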

  9. A Study of the Effects of Large Scale Gust Generation in a Small Scale Atmospheric Wind Tunnel: Application to Micro Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Roadman, Jason; Mohseni, Kamran

    2009-11-01

    Modern technology operating in the atmospheric boundary layer could benefit from more accurate wind tunnel testing. While scaled atmospheric boundary layer tunnels have been well developed, tunnels replicating portions of the turbulence of the atmospheric boundary layer at full scale are a comparatively new concept. Testing at full-scale Reynolds numbers with full-scale turbulence in an ``atmospheric wind tunnel'' is sought. Many programs could utilize such a tool, including Micro Aerial Vehicles (MAVs) and other unmanned aircraft, the wind energy industry, fuel-efficient vehicles, and the study of bird and insect flight. The construction of an active ``gust generator'' for a new atmospheric tunnel is reviewed and the turbulence it generates is measured utilizing single and cross hot wires. Results from this grid are compared to atmospheric turbulence, and it is shown that various gust strengths can be produced corresponding to days ranging from calm to quite gusty. An initial test is performed in the atmospheric wind tunnel whereby the effects of various turbulence conditions on transition and separation on the upper surface of a MAV wing are investigated using oil flow visualization.

  10. Computer-generated holograms (CGH) realization: the integration of dedicated software tool with digital slides printer

    NASA Astrophysics Data System (ADS)

    Guarnieri, Vittorio; Francini, Franco

    1997-12-01

    The latest generation of digital printers is usually characterized by a spatial resolution high enough to allow the designer to realize a binary CGH directly on a transparent film, avoiding photographic reduction techniques. These devices are able to produce slides or offset prints. Furthermore, services supplied by commercial printing companies provide an inexpensive method to rapidly verify the validity of a design by means of a test-and-trial process. Notably, this low-cost approach appears to be suitable for a didactical environment. On the basis of these considerations, a set of software tools able to design CGHs has been developed. The guidelines inspiring the work have been the following: (1) a ray-tracing approach, considering the object to be reproduced as a source of spherical waves; (2) optimization and speed-up of the algorithms used, in order to produce portable code, runnable on several hardware platforms. In this paper, calculation methods to obtain some fundamental geometric functions (points, lines, curves) are described. Furthermore, by the juxtaposition of these primitive functions it is possible to produce holograms of more complex objects. Many examples of generated CGHs are presented.
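
    The ray-tracing approach described above treats each object point as a spherical-wave emitter whose interference with a reference wave is thresholded into a binary pattern. The following is an illustrative sketch of that idea for a single point source, not the authors' code; the wavelength, pixel pitch, and geometry are assumed values:

        # Minimal binary CGH of a point source (illustrative parameters only):
        # sum spherical waves from object points, interfere with a tilted plane
        # reference wave, and threshold to a binary transmittance pattern.
        import numpy as np

        wavelength = 633e-9                      # He-Ne wavelength, assumed
        k = 2 * np.pi / wavelength
        n = 1024                                 # hologram samples per side
        pitch = 10e-6                            # printer-limited pixel pitch, assumed
        x = (np.arange(n) - n / 2) * pitch
        X, Y = np.meshgrid(x, x)

        points = [(0.0, 0.0, 0.05)]              # object points (x, y, z) in metres
        field = np.zeros_like(X, dtype=complex)
        for px, py, pz in points:
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += np.exp(1j * k * r) / r      # spherical wave from the point

        reference = np.exp(1j * k * X * np.sin(np.deg2rad(1.0)))  # tilted plane wave
        intensity = np.abs(field + reference) ** 2
        binary_cgh = (intensity > intensity.mean()).astype(np.uint8)  # 0/1 pattern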

  11. The mission events graphic generator software: A small tool with big results

    NASA Technical Reports Server (NTRS)

    Lupisella, Mark; Leibee, Jack; Scaffidi, Charles

    1993-01-01

    Utilization of graphics has long been a useful methodology for many aspects of spacecraft operations. A personal-computer-based software tool that implements straightforward graphics and greatly enhances spacecraft operations is presented. This unique software tool is the Mission Events Graphic Generator (MEGG) software, which is used in support of the Hubble Space Telescope (HST) Project. MEGG reads the HST mission schedule and generates a graphical timeline.

  12. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
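
    As a toy illustration of the scenario-based idea, a finite set of scenarios (event sequences) can be folded mechanically into a state-transition model whose behaviors are exactly those scenarios. This sketch is didactic only and is not the authors' formal method:

        # Toy derivation of a state machine from a finite set of scenarios.
        # Each scenario contributes transitions; the union is a prefix-tree
        # automaton accepting exactly the given behaviors. Didactic sketch only.
        def scenarios_to_fsm(scenarios):
            transitions = {}
            for scenario in scenarios:
                state = ()  # states are tuples of events seen so far
                for event in scenario:
                    transitions.setdefault(state, {})[event] = state + (event,)
                    state = state + (event,)
            return transitions

        fsm = scenarios_to_fsm([
            ("power_on", "self_test", "ready"),
            ("power_on", "self_test", "fault", "safe_mode"),
        ])
        print(fsm[("power_on", "self_test")])  # two possible next events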

  14. Simulating Next-Generation Sequencing Datasets from Empirical Mutation and Sequencing Models

    PubMed Central

    Stephens, Zachary D.; Hudson, Matthew E.; Mainzer, Liudmila S.; Taschuk, Morgan; Weber, Matthew R.; Iyer, Ravishankar K.

    2016-01-01

    An obstacle to validating and benchmarking methods for genome analysis is that there are few reference datasets available for which the “ground truth” about the mutational landscape of the sample genome is known and fully validated. Additionally, the free and public availability of real human genome datasets is incompatible with the preservation of donor privacy. In order to better analyze and understand genomic data, we need test datasets that model all variants, reflecting known biology as well as sequencing artifacts. Read simulators can fulfill this requirement, but are often criticized for limited resemblance to true data and overall inflexibility. We present NEAT (NExt-generation sequencing Analysis Toolkit), a set of tools that not only includes an easy-to-use read simulator, but also scripts to facilitate variant comparison and tool evaluation. NEAT has a wide variety of tunable parameters which can be set manually on the default model or parameterized using real datasets. The software is freely available at github.com/zstephens/neat-genreads. PMID:27893777
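
    The core of any read simulator can be sketched in a few lines: sample read start positions from a reference sequence and inject errors at some rate. The sketch below is a generic illustration with a flat substitution-error model; NEAT's empirical error and mutation models, paired-end support, and variant insertion are far richer:

        # Generic read-simulator core: draw read start positions uniformly from
        # a reference sequence and inject substitution errors at a fixed
        # per-base rate. This is only the basic idea, not NEAT's model.
        import random

        def simulate_reads(reference, read_len=100, n_reads=1000,
                           error_rate=0.01, seed=0):
            rng = random.Random(seed)
            bases = "ACGT"
            reads = []
            for _ in range(n_reads):
                start = rng.randrange(len(reference) - read_len + 1)
                read = list(reference[start:start + read_len])
                for i in range(read_len):
                    if rng.random() < error_rate:  # substitution error
                        read[i] = rng.choice([b for b in bases if b != read[i]])
                reads.append("".join(read))
            return reads

        ref = "".join(random.Random(1).choice("ACGT") for _ in range(10_000))
        reads = simulate_reads(ref)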

  15. Enhancing efficiency and quality of statistical estimation of immunogenicity assay cut points through standardization and automation.

    PubMed

    Su, Cheng; Zhou, Lei; Hu, Zheng; Weng, Winnie; Subramani, Jayanthi; Tadkod, Vineet; Hamilton, Kortney; Bautista, Ami; Wu, Yu; Chirmule, Narendra; Zhong, Zhandong Don

    2015-10-01

    Biotherapeutics can elicit immune responses, which can alter the exposure, safety, and efficacy of the therapeutics. A well-designed and robust bioanalytical method is critical for the detection and characterization of relevant anti-drug antibody (ADA) and the success of an immunogenicity study. As a fundamental criterion in immunogenicity testing, assay cut points need to be statistically established with a risk-based approach to reduce subjectivity. This manuscript describes the development of a validated, web-based, multi-tier customized assay statistical tool (CAST) for assessing cut points of ADA assays. The tool provides an intuitive web interface that allows users to import experimental data generated from a standardized experimental design, select the assay factors, run the standardized analysis algorithms, and generate tables, figures, and listings (TFL). It allows bioanalytical scientists to perform complex statistical analysis at the click of a button to produce reliable assay parameters in support of immunogenicity studies. Copyright © 2015 Elsevier B.V. All rights reserved.
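
    The paper does not reproduce its algorithms, but a common screening cut point computation (mean plus 1.645 standard deviations of log-transformed drug-naive responses, targeting a 5% false-positive rate under a normality assumption) can be sketched as follows; the function name and example data are illustrative:

        # Illustrative screening cut point for an ADA assay: log-transform the
        # drug-naive sample responses and take mean + 1.645 * SD, targeting a
        # 5% false-positive rate under a normality assumption. A common
        # approach, not necessarily the algorithm implemented in CAST.
        import numpy as np

        def screening_cut_point(responses):
            logs = np.log(np.asarray(responses, dtype=float))
            cut = logs.mean() + 1.645 * logs.std(ddof=1)
            return np.exp(cut)  # back-transform to the assay's response scale

        naive_signals = [1.02, 0.97, 1.10, 1.05, 0.93, 1.08, 1.01, 0.99]  # example
        print(screening_cut_point(naive_signals))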

  16. Annual Historical Review.

    DTIC Science & Technology

    1987-01-01

    Recoverable fragments from the scanned report: Major General Fred Hissong, Jr., Commanding General, US Army Armament, Munitions and Chemical Command (AMCCOM); table-of-contents entries include Handling for Brake and Clutch Repair, Steam Cleaners, Tool Improvement Program Suggestions, and Test Stand, Automotive Generator/Alternator.

  17. Preliminary Component Integration Using Rapid Prototyping Techniques

    NASA Technical Reports Server (NTRS)

    Cooper, Ken; Salvail, Pat; Gordon, Gail (Technical Monitor)

    2001-01-01

    Rapid prototyping is a very important tool that should be used by both design and manufacturing disciplines during the development of elements for the aerospace industry. It helps prevent inadequate communication between design and manufacturing engineers, which could lead to costly errors, through mutual consideration of functional models generated from drawings. Rapid prototyping techniques are used to test hardware for design and material compatibility at Marshall Space Flight Center.

  18. Analyzing Communication Architectures Using Commercial Off-The-Shelf (COTS) Modeling and Simulation Tools

    DTIC Science & Technology

    1998-06-01

    Recoverable fragments from the report: "By 2010, we should be able to change how we conduct the most intense joint operations. Instead of relying on massed forces and sequential ... not independent, sequential steps." Data probes to support the analysis phase were required to complete the logical models. This generated a need to identify granularity at the system level, establish physical bounds or limits to systems, and determine the system test configuration and lineup.

  19. Solar Sail Propulsion Technology at NASA

    NASA Technical Reports Server (NTRS)

    Johnson, Charles Les

    2007-01-01

    NASA's In-Space Propulsion Technology Program developed the first generation of solar sail propulsion systems sufficient to accomplish inner solar system science and exploration missions. These first generation solar sails, when operational, will range in size from 40 meters to well over 100 meters in diameter and have an area density of less than 13 grams per square meter. A rigorous, multi-year technology development effort culminated in 2005 with the testing of two different 20-m solar sail systems under thermal vacuum conditions. This effort provided a number of significant insights into the optimal design and expected performance of solar sails as well as an understanding of the methods and costs of building and using them. In addition, solar sail orbital analysis tools for mission design were developed and tested. Laboratory simulations of the effects of long-term space radiation exposure were also conducted on two candidate solar sail materials. Detailed radiation and charging environments were defined for mission trajectories outside the protection of the earth's magnetosphere, in the solar wind environment. These were used in other analytical tools to prove the adequacy of sail design features for accommodating the harsh space environment. The presentation will describe the status of solar sail propulsion within NASA, near-term solar sail mission applications, and near-term plans for further development.

  20. Patient-Generated Subjective Global Assessment of nutritional status in pediatric patients with recent cancer diagnosis.

    PubMed

    Vázquez de la Torre, Mayra Jezabel; Stein, Katja; Vásquez Garibay, Edgar Manuel; Kumazawa Ichikawa, Miguel Roberto; Troyo Sanromán, Rogelio; Salcedo Flores, Alicia Guadalupe; Sánchez Zubieta, Fernando Antonio

    2017-10-24

    The subjective global assessment (SGA) is a simple, sensitive tool used to identify nutritional risk. It is widely used in the adult population, but there is little evidence on its effectiveness in children with cancer. This cross-sectional study was undertaken to demonstrate significant correlation between a simplified version of the Patient-Generated SGA (PG-SGA) and anthropometric assessment to identify nutritional status in children recently diagnosed with cancer. The nutritional status of 70 pediatric cancer patients was assessed with the PG-SGA and anthropometric measurements. The relation between the assessments was tested with ANOVA, independent samples t-test, Kappa statistic, and non-parametric Spearman and Kendall correlation coefficient. The PG-SGA divided the patients into four groups: well nourished, mildly, moderately and severely malnourished. The prevalence of malnutrition according to the PG-SGA was 21.4%. The correlations (r ≥ 0.300, p < 0.001) and the concordance (k ≥ 0.327, p < 0.001) between the PG-SGA and anthropometric indicators were moderate and significant. The results indicate that the PG-SGA is a valid tool for assessing nutritional status in hospitalized children recently diagnosed with cancer. It is important to emphasize that the subjective assessment does not detect growth retardation, overweight or obesity.

  1. Clinical validation of the 50 gene AmpliSeq Cancer Panel V2 for use on a next generation sequencing platform using formalin fixed, paraffin embedded and fine needle aspiration tumour specimens.

    PubMed

    Rathi, Vivek; Wright, Gavin; Constantin, Diana; Chang, Siok; Pham, Huong; Jones, Kerryn; Palios, Atha; Mclachlan, Sue-Anne; Conron, Matthew; McKelvie, Penny; Williams, Richard

    2017-01-01

    The advent of massively parallel sequencing has caused a paradigm shift in the ways cancer is treated, as personalised therapy becomes a reality. More and more laboratories are looking to introduce next generation sequencing (NGS) as a tool for mutational analysis, as this technology has many advantages compared to conventional platforms like Sanger sequencing. In Australia all massively parallel sequencing platforms are still considered in-house in vitro diagnostic tools by the National Association of Testing Authorities (NATA) and a comprehensive analytical validation of all assays, and not just mere verification, is a strict requirement before accreditation can be granted for clinical testing on these platforms. Analytical validation of assays on NGS platforms can prove to be extremely challenging for pathology laboratories. Although there are many affordable and easily accessible NGS instruments available, there are no standardised guidelines as yet for clinical validation of NGS assays. We present an accreditation development procedure that was both comprehensive and applicable in a setting of hospital laboratory for NGS services. This approach may also be applied to other NGS applications in service laboratories. Copyright © 2016 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.

  2. Automation of Educational Tasks for Academic Radiology.

    PubMed

    Lamar, David L; Richardson, Michael L; Carlson, Blake

    2016-07-01

    The process of education involves a variety of repetitious tasks. We believe that appropriate computer tools can automate many of these chores, and allow both educators and their students to devote a lot more of their time to actual teaching and learning. This paper details tools that we have used to automate a broad range of academic radiology-specific tasks on Mac OS X, iOS, and Windows platforms. Some of the tools we describe here require little expertise or time to use; others require some basic knowledge of computer programming. We used TextExpander (Mac, iOS) and AutoHotKey (Win) for automated generation of text files, such as resident performance reviews and radiology interpretations. Custom statistical calculations were performed using TextExpander and the Python programming language. A workflow for automated note-taking was developed using Evernote (Mac, iOS, Win) and Hazel (Mac). Automated resident procedure logging was accomplished using Editorial (iOS) and Python. We created three variants of a teaching session logger using Drafts (iOS) and Pythonista (iOS). Editorial and Drafts were used to create flashcards for knowledge review. We developed a mobile reference management system for iOS using Editorial. We used the Workflow app (iOS) to automatically generate a text message reminder for daily conferences. Finally, we developed two separate automated workflows-one with Evernote (Mac, iOS, Win) and one with Python (Mac, Win)-that generate simple automated teaching file collections. We have beta-tested these workflows, techniques, and scripts on several of our fellow radiologists. All of them expressed enthusiasm for these tools and were able to use one or more of them to automate their own educational activities. Appropriate computer tools can automate many educational tasks, and thereby allow both educators and their students to devote a lot more of their time to actual teaching and learning. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  3. Second Generation Crop Yield Models Review

    NASA Technical Reports Server (NTRS)

    Hodges, T. (Principal Investigator)

    1982-01-01

    Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.

  4. Recent Developments in the Design, Capabilities and Autonomous Operations of a Lightweight Surface Manipulation System and Test-bed

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.; Jones, Thomas C.; Doggett, W. R.; Brady, Jeffrey S.; Berry, Felecia C.; Ganoe, George G.; Anderson, Eric; King, Bruce D.; Mercer, David C.

    2011-01-01

    The first generation of a versatile high performance device for performing payload handling and assembly operations on planetary surfaces, the Lightweight Surface Manipulation System (LSMS), has been designed and built. Over the course of its development, conventional crane type payload handling configurations and operations have been successfully demonstrated and the range of motion, types of operations and the versatility greatly expanded. This enhanced set of 1st generation LSMS hardware is now serving as a laboratory test-bed allowing the continuing development of end effectors, operational techniques and remotely controlled and automated operations. This paper describes the most recent LSMS and test-bed development activities, that have focused on two major efforts. The first effort was to complete a preliminary design of the 2nd generation LSMS that has the capability for limited mobility and can reposition itself between lander decks, mobility chassis, and fixed base locations. A major portion of this effort involved conducting a study to establish the feasibility of, and define, the specifications for a lightweight cable-drive waist joint. The second effort was to continue expanding the versatility and autonomy of large planetary surface manipulators using the 1st generation LSMS as a test-bed. This has been accomplished by increasing manipulator capabilities and efficiencies through both design changes and tool and end effector development. A software development effort has expanded the operational capabilities of the LSMS test-bed to include; autonomous operations based on stored paths, use of a vision system for target acquisition and tracking, and remote command and control over a communications bridge.

  5. JIMM: the next step for mission-level models

    NASA Astrophysics Data System (ADS)

    Gump, Jamieson; Kurker, Robert G.; Nalepka, Joseph P.

    2001-09-01

    The Simulation Based Acquisition (SBA) process is one in which the planning, design, and test of a weapon system or other product is done through the more effective use of modeling and simulation, information technology, and process improvement. This process results in a product that is produced faster, cheaper, and more reliably than its predecessors. Because the SBA process requires realistic and detailed simulation conditions, it was necessary to develop a simulation tool that would provide a simulation environment acceptable for doing SBA analysis. The Joint Integrated Mission Model (JIMM) was created to help define and meet the analysis, test and evaluation, and training requirements of a Department of Defense program utilizing SBA. Through its generic nature of representing simulation entities, its data analysis capability, and its robust configuration management process, JIMM can be used to support a wide range of simulation applications as both a constructive and a virtual simulation tool. JIMM is a Mission Level Model (MLM). A MLM is capable of evaluating the effectiveness and survivability of a composite force of air and space systems executing operational objectives in a specific scenario against an integrated air and space defense system. Because MLMs are useful for assessing a system's performance in a realistic, integrated threat environment, they are key to implementing the SBA process. JIMM is a merger of the capabilities of one legacy model, the Suppressor MLM, into another, the Simulated Warfare Environment Generator (SWEG) MLM. By creating a more capable MLM, JIMM will not only be a tool to support the SBA initiative, but could also provide the framework for the next generation of MLMs.

  6. voom: precision weights unlock linear model analysis tools for RNA-seq read counts

    PubMed Central

    2014-01-01

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods. PMID:24485249

  7. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.

    PubMed

    Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K

    2014-02-03

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
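
    A simplified, illustrative rendering of the voom idea in Python (the published method is implemented in the R/limma package and includes refinements omitted here): estimate the mean-variance trend of the log-counts and convert the fitted trend into per-observation inverse-variance weights:

        # Simplified voom-style weights: lowess trend of sqrt standard
        # deviation vs. mean log-CPM across genes, then per-observation
        # weights as the inverse of the predicted variance.
        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        def voom_like_weights(counts):
            # counts: genes x samples matrix of raw read counts
            lib_size = counts.sum(axis=0)
            logcpm = np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)
            mean_log = logcpm.mean(axis=1)
            sqrt_sd = np.sqrt(logcpm.std(axis=1, ddof=1))
            trend = lowess(sqrt_sd, mean_log, frac=0.5, return_sorted=True)
            # predict sqrt-sd for every observation from the fitted trend
            pred = np.interp(logcpm, trend[:, 0], trend[:, 1])
            return 1.0 / pred ** 4  # inverse predicted variance

        rng = np.random.default_rng(0)
        weights = voom_like_weights(rng.poisson(50, size=(200, 6)).astype(float))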

  8. SU-E-T-100: Designing a QA Tool for Enhanced Dynamic Wedges Based On Dynalog Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yousuf, A; Hussain, A

    2014-06-01

    Purpose: A robust quality assurance (QA) program for computer-controlled enhanced dynamic wedges (EDW) has been designed and tested. The calculations performed in this QA test are based upon the EDW dynamic log files generated during dose delivery. Methods: The Varian record-and-verify system generates dynamic log (dynalog) files during dynamic dose delivery. The system-generated dynalog files contain information such as date and time of treatment, energy, monitor units, wedge orientation, and type of treatment. They also contain the expected calculated segmented treatment tables (STT) and the actual delivered STT for the treatment delivery as a verification record. These files can be used to assess the integrity and precision of the treatment plan delivery. The plans were delivered with a 6 MV beam from a Varian linear accelerator. For the available EDW angles (10°, 15°, 20°, 25°, 30°, 45°, and 60°), Varian STT values were used to manually calculate monitor units for each segment. The files can also be used to calculate the EDW factors. Independent verification of fractional MUs per segment was performed against those generated from dynalog files. The EDW factors used to calculate MUs in the TPS were dosimetrically verified in a solid water phantom with a semiflex chamber on the central axis. Results: EDW factors were generated from the STT provided by Varian and verified against practical measurements. The measurements agreed with the calculated EDW data to within approximately 1%. Variation between the MUs per segment obtained from dynalog files and those manually calculated was found to be less than 2%. Conclusion: An efficient and easy tool to perform the routine QA procedure of EDW is suggested. The method can be easily implemented in any institution without a need for expensive QA equipment. An error of the order of ≥2% can be easily detected.
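
    The independent MU-per-segment check described above amounts to differencing the cumulative fractional MU values in the STT. A minimal sketch, with placeholder STT fractions rather than Varian's actual tables:

        # Illustrative independent check of EDW delivery: given an STT as
        # cumulative fractional MU per jaw position, compute the MU delivered
        # in each segment and compare planned against dynalog-recovered values.
        # The fractions below are placeholders, not Varian's published tables.
        def segment_mu(total_mu, cumulative_fractions):
            mu = [total_mu * f for f in cumulative_fractions]
            return [b - a for a, b in zip([0.0] + mu[:-1], mu)]

        planned = segment_mu(200.0, [0.10, 0.25, 0.45, 0.70, 1.00])
        delivered = segment_mu(200.0, [0.101, 0.252, 0.449, 0.698, 1.00])

        for i, (p, d) in enumerate(zip(planned, delivered)):
            diff = 100.0 * abs(d - p) / p
            print(f"segment {i}: planned {p:.1f} MU, "
                  f"delivered {d:.1f} MU, diff {diff:.2f}%")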

  9. Iterating between Tools to Create and Edit Visualizations.

    PubMed

    Bigelow, Alex; Drucker, Steven; Fisher, Danyel; Meyer, Miriah

    2017-01-01

    A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization, and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and changes between them need to be reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.

  10. Use of immuno assays during the development of a Hemophilus influenzae type b vaccine for technology transfer to emerging vaccine manufacturers.

    PubMed

    Hamidi, Ahd; Kreeftenberg, Hans

    2014-01-01

    Quality control of Hemophilus influenzae type b (Hib) conjugate vaccines is mainly dependent on physicochemical methods. Overcoming sample matrix interference when using physicochemical tests is very challenging; these tests are therefore only used to test purified samples of polysaccharide, protein, bulk conjugate, and final product. For successful development of a Hib conjugate vaccine, several ELISA (enzyme-linked immunosorbent assay) methods were needed as an additional tool to enable testing of in-process (IP) samples. In this paper, three of the ELISAs that were very valuable during process development, implementation, and scaling up are highlighted. The PRP-ELISA was a very efficient tool for testing in-process (IP) samples generated during the development of the cultivation and purification process of the Hib polysaccharide. The antigenicity ELISA was used to confirm the covalent linkage of PRP and TTd in the conjugate. The anti-PRP IgG ELISA was developed as part of the immunogenicity test, used to demonstrate the ability of the Hib conjugate vaccine to elicit a T-cell dependent immune response in mice. ELISA methods are relatively cheap and easy to implement and are therefore very useful during the development of polysaccharide conjugate vaccines.

  11. A GIS Tool for evaluating and improving NEXRAD and its application in distributed hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Srinivasan, R.

    2008-12-01

    In this study, a user-friendly GIS tool was developed for evaluating and improving NEXRAD using raingauge data. This GIS tool can automatically read in raingauge and NEXRAD data, evaluate the accuracy of NEXRAD for each time unit, implement several geostatistical methods to improve the accuracy of NEXRAD through raingauge data, and output spatial precipitation maps for distributed hydrologic models. The geostatistical methods incorporated in this tool include Simple Kriging with varying local means, Kriging with External Drift, Regression Kriging, Co-Kriging, and a new geostatistical method developed by Li et al. (2008). The tool was applied in two test watersheds at hourly and daily temporal scales. The preliminary cross-validation results show that incorporating raingauge data to calibrate NEXRAD can markedly change the spatial pattern of NEXRAD and improve its accuracy. Using different geostatistical methods, the GIS tool was applied to produce long-term precipitation input for a distributed hydrologic model - the Soil and Water Assessment Tool (SWAT). Animated video was generated to vividly illustrate the effect of using different precipitation input data on distributed hydrologic modeling. Currently, this GIS tool is developed as an extension of SWAT, which is used as a water quantity and quality modeling tool by USDA and EPA. The flexible module-based design of this tool also makes it easy to adapt to other hydrologic models for hydrological modeling and water resources management.
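
    As a stand-in for the kriging variants named above, the simplest gauge-based adjustment, a multiplicative mean-field bias correction, illustrates how raingauge data can recalibrate a NEXRAD field (values below are invented):

        # Illustrative gauge-based correction of a radar (NEXRAD) field: a
        # simple mean-field bias adjustment, used here as a stand-in for the
        # kriging variants the tool implements (which also spatially
        # interpolate the gauge-radar residuals).
        import numpy as np

        def mean_field_bias(radar_at_gauges, gauge_values):
            radar = np.asarray(radar_at_gauges, dtype=float)
            gauge = np.asarray(gauge_values, dtype=float)
            ok = radar > 0
            return gauge[ok].sum() / radar[ok].sum()  # multiplicative factor

        radar_field = np.array([[2.0, 3.5], [1.0, 0.0]])   # mm/h grid (example)
        bias = mean_field_bias([2.0, 3.5, 1.0], [2.4, 3.9, 1.3])
        corrected = radar_field * bias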

  12. Java Tool Framework for Automation of Hardware Commissioning and Maintenance Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, J C; Fisher, J M; Gordon, J B

    2007-10-02

    The National Ignition Facility (NIF) is a 192-beam laser system designed to study high energy density physics. Each beam line contains a variety of line replaceable units (LRUs) that contain optics, stepping motors, sensors and other devices to control and diagnose the laser. During commissioning and subsequent maintenance of the laser, LRUs undergo a qualification process using the Integrated Computer Control System (ICCS) to verify and calibrate the equipment. The commissioning processes are both repetitive and tedious when we use remote manual computer controls, making them ideal candidates for software automation. Maintenance and Commissioning Tool (MCT) software was developed to improve the efficiency of the qualification process. The tools are implemented in Java, leveraging ICCS services and CORBA to communicate with the control devices. The framework provides easy-to-use mechanisms for handling configuration data, task execution, task progress reporting, and generation of commissioning test reports. The tool framework design and application examples will be discussed.

  13. RNA-seq Data: Challenges in and Recommendations for Experimental Design and Analysis.

    PubMed

    Williams, Alexander G; Thomas, Sean; Wyman, Stacia K; Holloway, Alisha K

    2014-10-01

    RNA-seq is widely used to determine differential expression of genes or transcripts as well as identify novel transcripts, identify allele-specific expression, and precisely measure translation of transcripts. Thoughtful experimental design and choice of analysis tools are critical to ensure high-quality data and interpretable results. Important considerations for experimental design include number of replicates, whether to collect paired-end or single-end reads, sequence length, and sequencing depth. Common analysis steps in all RNA-seq experiments include quality control, read alignment, assigning reads to genes or transcripts, and estimating gene or transcript abundance. Our aims are two-fold: to make recommendations for common components of experimental design and assess tool capabilities for each of these steps. We also test tools designed to detect differential expression, since this is the most widespread application of RNA-seq. We hope that these analyses will help guide those who are new to RNA-seq and will generate discussion about remaining needs for tool improvement and development. Copyright © 2014 John Wiley & Sons, Inc.

  14. The Space Environmental Impact System

    NASA Astrophysics Data System (ADS)

    Kihn, E. A.

    2009-12-01

    The Space Environmental Impact System (SEIS) is an operational tool for incorporating environmental data sets into DoD Modeling and Simulation (M&S), which allows for enhanced decision making regarding acquisitions, testing, operations, and planning. The SEIS system creates, from the environmental archives and a developed rule base, a tool for describing the effects of the space environment on particular military systems, both historically and in real time. The system uses data available over the web, in particular data provided by NASA’s virtual observatory network, as well as modeled data generated specifically for this purpose. The rule-base system developed to support SEIS is an open, XML-based model which can be extended to events from any environmental domain. This presentation will show how the SEIS tool allows users to easily and accurately evaluate the effect of space weather in terms that are meaningful to them, as well as discuss the relevant standards used in its construction and go over lessons learned from fielding an operational environmental decision tool.

  15. SOAP-T: a tool to study the light curve and radial velocity of a system with a transiting planet and a rotating spotted star

    NASA Astrophysics Data System (ADS)

    Oshagh, M.; Boisse, I.; Boué, G.; Montalto, M.; Santos, N. C.; Bonfils, X.; Haghighipour, N.

    2013-01-01

    We present an improved version of SOAP named "SOAP-T", which can generate the radial velocity variations and light curves for systems consisting of a rotating spotted star with a transiting planet. This tool can be used to study the anomalies inside transit light curves and the Rossiter-McLaughlin effect, to better constrain the orbital configuration and properties of planetary systems and the active zones of their host stars. Tests of the code are presented to illustrate its performance and to validate its capability when compared with analytical models and real data. Finally, we apply SOAP-T to the active star, HAT-P-11, observed by the NASA Kepler space telescope and use this system to discuss the capability of this tool in analyzing light curves for the cases where the transiting planet overlaps with the star's spots. The tool's public interface is available at http://www.astro.up.pt/resources/soap-t/

  16. Numerical tool for tsunami risk assessment in the southern coast of Dominican Republic

    NASA Astrophysics Data System (ADS)

    Macias Sanchez, J.; Llorente Isidro, M.; Ortega, S.; Gonzalez Vida, J. M., Sr.; Castro, M. J.

    2016-12-01

    The southern coast of the Dominican Republic is a very populated region, with several important cities including Santo Domingo, its capital. Important activities are rooted in the southern coast, including tourism, industry, commercial ports, and energy facilities, among others. According to historical reports, it has been impacted by big earthquakes accompanied by tsunamis, as in Azua in 1751 and recently Pedernales in 2010, but their sources are not clearly identified. The aim of the present work is to develop a numerical tool to simulate the impact on the southern coast of the Dominican Republic of tsunamis generated in the Caribbean Sea. This tool, based on the Tsunami-HySEA model from the EDANYA group (University of Malaga, Spain), could be used in the framework of a Tsunami Early Warning System due to the very short computing times when only propagation is computed, or it could be used to assess inundation impact, computing inundation with an initial 5-meter resolution. Numerical results corresponding to three theoretical sources are used to test the numerical tool.

  17. Modelling the urban water cycle as an integrated part of the city: a review.

    PubMed

    Urich, Christian; Rauch, Wolfgang

    2014-01-01

    In contrast to common perceptions, the urban water infrastructure system is a complex and dynamic system that is constantly evolving and adapting to changes in the urban environment, to sustain existing services and provide additional ones. Instead of simplifying urban water infrastructure to a static system that is decoupled from its urban context, new management strategies use the complexity of the system to their advantage by integrating centralised with decentralised solutions and explicitly embedding water systems into their urban form. However, to understand and test possible adaptation strategies, urban water modelling tools are required to support exploration of their effectiveness as the human-technology-environment system coevolves under different future scenarios. The urban water modelling community has taken first steps to developing these new modelling tools. This paper critically reviews the historical development of urban water modelling tools and provides a summary of the current state of integrated modelling approaches. It reflects on the challenges that arise through the current practice of coupling urban water management tools with urban development models and discusses a potential pathway towards a new generation of modelling tools.

  18. Metrics for the National SCADA Test Bed Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, Philip A.; Mortensen, J.; Dagle, Jeffery E.

    2008-12-05

    The U.S. Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) National SCADA Test Bed (NSTB) Program is providing valuable inputs into the electric industry by performing topical research and development (R&D) to secure next generation and legacy control systems. In addition, the program conducts vulnerability and risk analysis, develops tools, and performs industry liaison, outreach and awareness activities. These activities will enhance the secure and reliable delivery of energy for the United States. This report will describe metrics that could be utilized to provide feedback to help enhance the effectiveness of the NSTB Program.

  19. Evaluation of computer-generated guidelines for companions of paediatric patients undergoing chemotherapy.

    PubMed

    Lopes, Vagner José; Shmeil, Marcos Augusto Hochuli

    2017-04-27

    To compare computer-generated guidelines produced with and without the use of a Clinical Decision Support System - Oncology Care and Healthcare for Chemotherapy Patients - for the caregivers of children undergoing chemotherapy. This is a descriptive, evaluative, and quantitative study conducted at a paediatrics hospital in Curitiba, Paraná, Brazil, from December 2015 to January 2016. The sample consisted of 58 participants divided into two groups: Group 1, without the aid of the software, and Group 2, with the aid of the software. The data were analysed using the Mann-Whitney U test. The guidelines showed a statistically significant difference (p < 0.05), with a higher average concordance in Group 2 than in Group 1. Computer-generated guidelines are a valuable qualitative support tool for nurses.
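
    The group comparison reported above can be reproduced with SciPy's Mann-Whitney U implementation; the concordance scores below are invented placeholders, not the study's data:

        # Comparing concordance scores of the two caregiver groups with the
        # Mann-Whitney U test, as in the study design. Scores are invented
        # placeholders, not the study's data.
        from scipy.stats import mannwhitneyu

        group1 = [3, 4, 3, 2, 4, 3, 3, 2]  # guidelines without the CDS (example)
        group2 = [4, 5, 4, 5, 4, 5, 4, 4]  # guidelines with the CDS (example)

        stat, p = mannwhitneyu(group1, group2, alternative="two-sided")
        print(f"U = {stat:.1f}, p = {p:.4f}")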

  20. On the virtues of automated quantitative structure-activity relationship: the new kid on the block.

    PubMed

    de Oliveira, Marcelo T; Katekawa, Edson

    2018-02-01

    Quantitative structure-activity relationship (QSAR) modelling has proved to be an invaluable tool in medicinal chemistry. Data availability at unprecedented levels through various databases has contributed to a resurgence of interest in QSAR. In this context, rapid generation of quality predictive models is highly desirable for hit identification and lead optimization. We showcase the application of an automated QSAR approach, which randomly selects multiple training/test sets and utilizes machine-learning algorithms to generate predictive models. Results demonstrate that AutoQSAR produces models of improved or similar quality to those generated by practitioners in the field, but in just a fraction of the time. Despite the potential of the concept to benefit the community, the AutoQSAR opportunity has been largely undervalued.
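
    A minimal sketch of the automated loop the abstract describes: repeated random training/test splits, a machine-learning model fitted per split, and the best model kept by test-set performance. The descriptors, activities, and model choice are stand-ins, not the AutoQSAR implementation:

        # Minimal automated-QSAR loop: repeatedly draw random training/test
        # splits, fit a model on each, and keep the model with the best
        # test-set performance. X and y are placeholder descriptors and
        # activities; the model choice is an assumption.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 20))                 # placeholder descriptors
        y = X[:, 0] * 2.0 - X[:, 1] + rng.normal(scale=0.5, size=300)

        best_model, best_r2 = None, -np.inf
        for seed in range(10):                          # 10 random splits
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                      random_state=seed)
            model = RandomForestRegressor(n_estimators=200, random_state=seed)
            model.fit(X_tr, y_tr)
            r2 = r2_score(y_te, model.predict(X_te))
            if r2 > best_r2:
                best_model, best_r2 = model, r2

        print(f"best test-set R^2 across splits: {best_r2:.3f}")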

  1. Complementing in vitro screening assays with in silico ...

    EPA Pesticide Factsheets

    High-throughput in vitro assays offer a rapid, cost-efficient means to screen thousands of chemicals across hundreds of pathway-based toxicity endpoints. However, one main concern involved with the use of in vitro assays is the erroneous omission of chemicals that are inactive under assay conditions but that can generate active metabolites under in vivo conditions. To address this potential issue, a case study will be presented to demonstrate the use of in silico tools to identify inactive parents with the ability to generate active metabolites. This case study used the results from an orthogonal assay designed to improve confidence in the identification of active chemicals tested across eighteen estrogen receptor (ER)-related in vitro assays by accounting for technological limitations inherent within each individual assay. From the 1,812 chemicals tested within the orthogonal assay, 1,398 were considered inactive. These inactive chemicals were analyzed using Chemaxon Metabolizer software to predict the first and second generation metabolites. From the nearly 1,400 inactive chemicals, over 2,200 first-generation (i.e., primary) metabolites and over 5,500 second-generation (i.e., secondary) metabolites were predicted. Nearly 70% of primary metabolites were immediately detoxified or converted to other metabolites, while over 70% of secondary metabolites remained stable. Among these predicted metabolites, those that are most likely to be produced and remain

  2. Next-generation sequencing for endocrine cancers: Recent advances and challenges.

    PubMed

    Suresh, Padmanaban S; Venkatesh, Thejaswini; Tsutsumi, Rie; Shetty, Abhishek

    2017-05-01

    Contemporary molecular biology research tools have enriched numerous areas of biomedical research that address challenging diseases, including endocrine cancers (pituitary, thyroid, parathyroid, adrenal, testicular, ovarian, and neuroendocrine cancers). These tools have placed several intriguing clues before the scientific community. Endocrine cancers pose a major challenge in health care and research despite considerable attempts by researchers to understand their etiology. Microarray analyses have provided gene signatures from many cells, tissues, and organs that can differentiate healthy states from diseased ones, and even show patterns that correlate with stages of a disease. Microarray data can also elucidate the responses of endocrine tumors to therapeutic treatments. The rapid progress in next-generation sequencing methods has overcome many of the initial challenges of these technologies, and their advantages over microarray techniques have enabled them to emerge as valuable aids for clinical research applications (prognosis, identification of drug targets, etc.). A comprehensive review describing the recent advances in next-generation sequencing methods and their application in the evaluation of endocrine and endocrine-related cancers is lacking. The main purpose of this review is to illustrate the concepts that collectively constitute our current view of the possibilities offered by next-generation sequencing technological platforms, challenges to relevant applications, and perspectives on the future of clinical genetic testing of patients with endocrine tumors. We focus on recent discoveries in the use of next-generation sequencing methods for clinical diagnosis of endocrine tumors in patients and conclude with a discussion on persisting challenges and future objectives.

  3. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  4. Commerce and genetic diagnostics.

    PubMed

    Silverman, Paul H

    1995-01-01

    The revolution in molecular biology and molecular genetics has begun to reveal the sequence of events that links genes and disease. As a result of activities such as the Human Genome Project, a parallel revolution in technology is bringing nearer to hand the possibility of readily available genetic diagnostics. Genetic testing services have begun to move out of the academic medical centers and into the private enterprise arena. Under these circumstances it is important to understand the factors affecting the availability and application of this powerful predictive tool in a for-profit mode. How does the marketplace encourage or discourage genetic testing? Will the same market influences that generate pharmaceutical sales be operating to "sell" genetic tests?

  5. An electronic clinical decision support tool to assist primary care providers in cardiovascular disease risk management: development and mixed methods evaluation.

    PubMed

    Peiris, David P; Joshi, Rohina; Webster, Ruth J; Groenestein, Patrick; Usherwood, Tim P; Heeley, Emma; Turnbull, Fiona M; Lipman, Alexandra; Patel, Anushka A

    2009-12-17

    Challenges remain in translating the well-established evidence for management of cardiovascular disease (CVD) risk into clinical practice. Although electronic clinical decision support (CDS) systems are known to improve practitioner performance, their development in Australian primary health care settings is limited. Study aims were to (1) develop a valid CDS tool that assists Australian general practitioners (GPs) in global CVD risk management, and (2) preliminarily evaluate its acceptability to GPs as a point-of-care resource for both general and underserved populations. CVD risk estimation (based on Framingham algorithms) and risk-based management advice (using recommendations from six Australian guidelines) were programmed into a software package. Tool validation: Data from 137 patients attending a physician's clinic were analyzed to compare the tool's risk scores with those obtained from an independently programmed algorithm in a separate statistics package. The tool's management advice was compared with a physician's recommendations based on a manual review of the guidelines. Field test: The tool was then tested with 21 GPs from eight general practices and three Aboriginal Medical Services. Customized CDS-based recommendations were generated for 200 routinely attending patients (33% Aboriginal) using information extracted from the health record by a research assistant. GPs reviewed these recommendations during each consultation. Changes in CVD risk factor measurement and management were recorded. In-depth interviews with GPs were conducted. Validation testing: the tool's risk assessment algorithm correlated very highly with the independently programmed version in the separate statistics package (intraclass correlation coefficient 0.999). For management advice, there were only two cases of disagreement between the tool and the physician. Field test: GPs found 77% (153/200) of patient outputs easy to understand and agreed with screening and prescribing recommendations in 72% and 64% of outputs, respectively; 26% of patients had their CVD risk factor history updated; 73% had at least one CVD risk factor measured or tests ordered. For people assessed at high CVD risk (n = 82), 10% and 9%, respectively, had lipid-lowering and BP-lowering medications commenced or dose adjustments made, while 7% newly commenced anti-platelet medications. Three key qualitative findings emerged: (1) GPs found the tool enabled a systematic approach to care; (2) the tool greatly influenced CVD risk communication; (3) successful implementation into routine care would require integration with practice software, minimal data entry, regular revision with updated guidelines, and a self-auditing feature. There were no substantive differences in study findings for Aboriginal Medical Services GPs, and the tool was generally considered appropriate for use with Aboriginal patients. A fully-integrated, self-populating, and potentially Internet-based CDS tool could contribute to improved global CVD risk management in Australian primary health care. The findings from this study will inform a large-scale trial intervention.
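
    The validation step, comparing the tool's risk scores against an independently programmed algorithm, is an agreement problem. The sketch below uses Lin's concordance correlation coefficient as a stand-in for the intraclass correlation the study reports, with invented scores:

        # Checking agreement between two implementations of the same risk
        # algorithm. Lin's concordance correlation coefficient is used here as
        # a stand-in for the intraclass correlation the study reports; the
        # scores are invented.
        import numpy as np

        def lins_ccc(x, y):
            x, y = np.asarray(x, float), np.asarray(y, float)
            cov = np.cov(x, y, ddof=1)[0, 1]
            return 2 * cov / (x.var(ddof=1) + y.var(ddof=1)
                              + (x.mean() - y.mean()) ** 2)

        tool_scores = [0.05, 0.12, 0.31, 0.08, 0.22, 0.17]  # 5-year CVD risks
        reference = [0.05, 0.12, 0.30, 0.08, 0.23, 0.17]    # independent code
        print(f"concordance = {lins_ccc(tool_scores, reference):.3f}")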

  6. A study of large scale gust generation in a small scale atmospheric wind tunnel with applications to Micro Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Roadman, Jason Markos

    Modern technology operating in the atmospheric boundary layer can always benefit from more accurate wind tunnel testing. While scaled atmospheric boundary layer tunnels have been well developed, tunnels replicating portions of the atmospheric boundary layer turbulence at full scale are a comparatively new concept. Testing at full-scale Reynolds numbers with full-scale turbulence in an "atmospheric wind tunnel" is sought. Many programs could utilize such a tool, including Micro Aerial Vehicle (MAV) development, the wind energy industry, fuel-efficient vehicle design, and the study of bird and insect flight, to name just a few. The small scale of MAVs provides the somewhat unique capability of full-scale Reynolds number testing in a wind tunnel. However, that same small scale creates interactions under real-world flight conditions, atmospheric gusts for example, that lead to a need for testing under more complex flows than the standard uniform flow found in most wind tunnels. It is for these reasons that MAVs are used as the initial testing application for the atmospheric gust tunnel. An analytical model for both discrete gusts and a continuous spectrum of gusts is examined. Then, methods for generating gusts in agreement with that model are investigated. Previously used methods are reviewed and a gust generation apparatus is designed. Expected turbulence and gust characteristics of this apparatus are compared with atmospheric data. The construction of an active "gust generator" for a new atmospheric tunnel is reviewed and the turbulence it generates is measured utilizing single and cross hot wires. Results from this grid are compared to atmospheric turbulence and it is shown that various gust strengths can be produced corresponding to weather ranging from calm to quite gusty. An initial test is performed in the atmospheric wind tunnel whereby the effects of various turbulence conditions on transition and separation on the upper surface of a MAV wing are investigated using the surface oil flow visualization technique.

  7. NetList(+): A simple interface language for chip design

    NASA Astrophysics Data System (ADS)

    Wuu, Tzyh-Yung

    1991-04-01

    NetList (+) is a design specification language developed at MOSIS for rapid turn-around cell-based ASIC prototyping. By using NetList (+), a uniform representation is achieved for the specification, simulation, and physical description of a design. The goal is to establish an interfacing methodology between design specification and independent computer aided design tools. Designers need only to specify a system by writing a corresponding netlist. This netlist is used for both functional simulation and timing simulation. The same netlist is also used to drive the low-level physical tools that generate layout. Another goal of using NetList (+) is to generate parts of a design by running it through different kinds of placement and routing (P and R) tools. For example, some parts of a design will be generated by standard cell P and R tools. Other parts may be generated by a layout tiler, i.e., a datapath compiler, RAM/ROM generator, or PLA generator. Finally, all different parts of a design can be integrated into a single chip by general block P and R tools. The NetList (+) language can actually act as an interface among tools. Section 2 shows a flowchart illustrating the NetList (+) system and its relation to other design tools. Section 3 shows how to write a NetList (+) description from the block diagram of a circuit. Section 4 discusses how to prepare a cell library or several cell libraries for a design system. Section 5 gives a few designs by NetList (+) and shows their simulation and layout results.

  8. Multicenter Validation of a Customizable Scoring Tool for Selection of Trainees for a Residency or Fellowship Program. The EAST-IST Study.

    PubMed

    Bosslet, Gabriel T; Carlos, W Graham; Tybor, David J; McCallister, Jennifer; Huebert, Candace; Henderson, Ashley; Miles, Matthew C; Twigg, Homer; Sears, Catherine R; Brown, Cynthia; Farber, Mark O; Lahm, Tim; Buckley, John D

    2017-04-01

    Few data have been published regarding scoring tools for selection of postgraduate medical trainee candidates that have wide applicability. The authors present a novel scoring tool developed to assist postgraduate programs in generating an institution-specific rank list derived from selected elements of the U.S. Electronic Residency Application System (ERAS) application. The authors developed and validated an ERAS and interview day scoring tool at five pulmonary and critical care fellowship programs: the ERAS Application Scoring Tool-Interview Scoring Tool. This scoring tool was then tested for intrarater correlation versus subjective rankings of ERAS applications. The process for development of the tool was performed at four other institutions, and it was performed alongside and compared with the "traditional" ranking methods at the five programs and compared with the submitted National Residency Match Program rank list. The ERAS Application Scoring Tool correlated highly with subjective faculty rankings at the primary institution (average Spearman's r = 0.77). The ERAS Application Scoring Tool-Interview Scoring Tool method correlated well with traditional ranking methodology at all five institutions (Spearman's r = 0.54, 0.65, 0.72, 0.77, and 0.84). This study validates a process for selecting and weighting components of the ERAS application and interview day to create a customizable, institution-specific tool for ranking candidates to postgraduate medical education programs. This scoring system can be used in future studies to compare the outcomes of fellowship training.

  9. Computational Tools and Algorithms for Designing Customized Synthetic Genes

    PubMed Central

    Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris

    2014-01-01

    Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations. PMID:25340050
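
    As a concrete illustration of the "first generation" of singular-objective tools described above, the sketch below performs naive codon-usage optimization: each amino acid is simply assigned its most frequent synonymous codon in the target host. The frequency table is a small illustrative subset, not real usage data; multi-objective tools layer further constraints (restriction sites, GC content, secondary structure) on top of this kind of greedy baseline.

    ```python
    # Minimal sketch of single-objective codon-usage optimization: replace
    # each amino acid with its most frequent synonymous codon in the host.
    # The frequency table below is an illustrative subset, not real data.
    CODON_USAGE = {  # amino acid -> {codon: relative frequency}
        "M": {"ATG": 1.00},
        "K": {"AAA": 0.74, "AAG": 0.26},
        "F": {"TTT": 0.58, "TTC": 0.42},
        "*": {"TAA": 0.61, "TGA": 0.30, "TAG": 0.09},
    }

    def optimize(protein: str) -> str:
        """Greedily pick the host's most frequent codon for each residue."""
        return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get)
                       for aa in protein)

    print(optimize("MKF*"))  # ATGAAATTTTAA
    ```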

  10. Video Games as a Training Tool to Prepare the Next Generation of Cyber Warriors

    DTIC Science & Technology

    2014-10-01

    Report documentation page and table-of-contents fragments only (extraction residue). Recoverable headings: "Video Games as a Training Tool to Prepare the Next Generation of Cyber Warriors"; "Cybersecurity Workforce Shortage"; "Greater Cybersecurity Education Is..."; "How Video Games Can Be Effective Learning Tools".

  11. The Exercise: An Exercise Generator Tool for the SOURCe Project

    ERIC Educational Resources Information Center

    Kakoyianni-Doa, Fryni; Tziafa, Eleni; Naskos, Athanasios

    2016-01-01

    The Exercise, an Exercise generator in the SOURCe project, is a tool that complements the properties and functionalities of the SOURCe project, which includes the search engine for the Searchable Online French-Greek parallel corpus for the UniveRsity of Cyprus (SOURCe) (Kakoyianni-Doa & Tziafa, 2013), the PENCIL (an alignment tool)…

  12. Use of the MATRIXx Integrated Toolkit on the Microwave Anisotropy Probe Attitude Control System

    NASA Technical Reports Server (NTRS)

    Ward, David K.; Andrews, Stephen F.; McComas, David C.; ODonnell, James R., Jr.

    1999-01-01

    Recent advances in analytical software tools allow the analysis, simulation, flight code, and documentation of an algorithm to be generated from a single source, all within one integrated analytical design package. NASA's Microwave Anisotropy Probe project has used one such package, Integrated Systems' MATRIXx suite, in the design of the spacecraft's Attitude Control System. The project's experience with the linear analysis, simulation, code generation, and documentation tools will be presented and compared with more traditional development tools. In particular, the quality of the flight software generated will be examined in detail. Finally, lessons learned on each of the tools will be shared.

  13. WE-AB-207B-07: Dose Cloud: Generating “Big Data” for Radiation Therapy Treatment Plan Optimization Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Folkerts, MM; University of California San Diego, La Jolla, California; Long, T

    Purpose: To provide a tool to generate large sets of realistic virtual patient geometries and beamlet doses for treatment optimization research. This tool enables countless studies exploring the fundamental interplay between patient geometry, objective functions, weight selections, and achievable dose distributions for various algorithms and modalities. Methods: Generating realistic virtual patient geometries requires a small set of real patient data. We developed a normalized patient shape model (PSM) which captures organ and target contours in a correspondence-preserving manner. Using PSM-processed data, we perform principal component analysis (PCA) to extract major modes of variation from the population. These PCA modes can be shared without exposing patient information. The modes are re-combined with different weights to produce sets of realistic virtual patient contours. Because virtual patients lack imaging information, we developed a shape-based dose calculation (SBD) relying on the assumption that the region inside the body contour is water. SBD utilizes a 2D fluence-convolved scatter kernel, derived from Monte Carlo simulations, and can either compute full dose for a given set of fluence maps or produce a dose matrix (dose per fluence pixel), for many modalities. Combining the shape model with SBD provides the data needed for treatment plan optimization research. Results: We used PSM to capture organ and target contours for 96 prostate cases, extracted the first 20 PCA modes, and generated 2048 virtual patient shapes by randomly sampling mode scores. Nearly half of the shapes were thrown out for failing anatomical checks; the remaining 1124 were used in computing dose matrices via SBD and a standard 7-beam protocol. As a proof of concept, and to generate data for later study, we performed fluence map optimization emphasizing PTV coverage. Conclusions: We successfully developed and tested a tool for creating customizable sets of virtual patients suitable for large-scale radiation therapy optimization research.
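
    The PCA step described in the Methods can be sketched as follows, assuming the PSM output is an array of correspondence-preserving contour coordinates; this is a minimal illustration with stand-in random data, not the authors' implementation.

    ```python
    # Sketch of generating virtual patient shapes by sampling PCA mode
    # scores. `contours` is assumed to be an (n_patients, n_points*2)
    # array of correspondence-preserving contour coordinates.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    contours = rng.normal(size=(96, 400))    # stand-in for 96 real cases

    pca = PCA(n_components=20).fit(contours)     # 20 major modes
    scores = pca.transform(contours)             # per-patient mode scores

    # Sample new mode scores from the observed per-mode distribution and
    # reconstruct virtual contours; anatomical sanity checks would follow.
    new_scores = rng.normal(scores.mean(axis=0), scores.std(axis=0),
                            size=(2048, 20))
    virtual_contours = pca.inverse_transform(new_scores)
    print(virtual_contours.shape)  # (2048, 400)
    ```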

  14. Statechart Analysis with Symbolic PathFinder

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.

    2012-01-01

    We report here on our on-going work that addresses the automated analysis and test case generation for software systems modeled using multiple Statechart formalisms. The work is motivated by large programs such as NASA Exploration, that involve multiple systems that interact via safety-critical protocols and are designed with different Statechart variants. To verify these safety-critical systems, we have developed Polyglot, a framework for modeling and analysis of model-based software written using different Statechart formalisms. Polyglot uses a common intermediate representation with customizable Statechart semantics and leverages the analysis and test generation capabilities of the Symbolic PathFinder tool. Polyglot is used as follows: First, the structure of the Statechart model (expressed in Matlab Stateflow or Rational Rhapsody) is translated into a common intermediate representation (IR). The IR is then translated into Java code that represents the structure of the model. The semantics are provided as "pluggable" modules.

  15. Upgrading the Digital Electronics of the PEP-II Bunch Current Monitors at the Stanford Linear Accelerator Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kline, Josh; /SLAC

    2006-08-28

    The testing of the upgrade prototype for the bunch current monitors (BCMs) in the PEP-II storage rings at the Stanford Linear Accelerator Center (SLAC) is the topic of this paper. Bunch current monitors are used to measure the charge in the electron/positron bunches traveling in particle storage rings. The BCMs in the PEP-II storage rings need to be upgraded because components of the current system have failed and are known to be failure prone with age, and several of the integrated chips are no longer produced, making repairs difficult if not impossible. The main upgrade is replacing twelve old (1995) field programmable gate arrays (FPGAs) with a single Virtex II FPGA. The prototype was tested using computer synthesis tools, a commercial signal generator, and a fast pulse generator.

  16. Reliable quantum certification of photonic state preparations

    PubMed Central

    Aolita, Leandro; Gogolin, Christian; Kliesch, Martin; Eisert, Jens

    2015-01-01

    Quantum technologies promise a variety of exciting applications. Even though impressive progress has been achieved recently, a major bottleneck currently is the lack of practical certification techniques. The challenge consists of ensuring that classically intractable quantum devices perform as expected. Here we present an experimentally friendly and reliable certification tool for photonic quantum technologies: an efficient certification test for experimental preparations of multimode pure Gaussian states, pure non-Gaussian states generated by linear-optical circuits with Fock-basis states of constant boson number as inputs, and pure states generated from the latter class by post-selecting with Fock-basis measurements on ancillary modes. Only classical computing capabilities and homodyne or heterodyne detection are required. Minimal assumptions are made on the noise or experimental capabilities of the preparation. The method constitutes a step forward in many-body quantum certification, which is ultimately about testing quantum mechanics at large scales. PMID:26577800

  17. Towards a New Generation of Agricultural System Data, Models and Knowledge Products: Design and Improvement

    NASA Technical Reports Server (NTRS)

    Antle, John M.; Basso, Bruno; Conant, Richard T.; Godfray, H. Charles J.; Jones, James W.; Herrero, Mario; Howitt, Richard E.; Keating, Brian A.; Munoz-Carpena, Rafael; Rosenzweig, Cynthia

    2016-01-01

    This paper presents ideas for a new generation of agricultural system models that could meet the needs of a growing community of end-users exemplified by a set of Use Cases. We envision new data, models and knowledge products that could accelerate the innovation process that is needed to achieve sustainable local, regional and global food security. We identify desirable features for models, and describe some of the potential advances that we envisage for model components and their integration. We propose an implementation strategy that would link a "pre-competitive" space for model development to a "competitive space" for knowledge product development and through private-public partnerships for new data infrastructure. Specific model improvements would be based on further testing and evaluation of existing models, the development and testing of modular model components and integration, and linkages of model integration platforms to new data management and visualization tools.

  18. Towards a new generation of agricultural system data, models and knowledge products: Design and improvement.

    PubMed

    Antle, John M; Basso, Bruno; Conant, Richard T; Godfray, H Charles J; Jones, James W; Herrero, Mario; Howitt, Richard E; Keating, Brian A; Munoz-Carpena, Rafael; Rosenzweig, Cynthia; Tittonell, Pablo; Wheeler, Tim R

    2017-07-01

    This paper presents ideas for a new generation of agricultural system models that could meet the needs of a growing community of end-users exemplified by a set of Use Cases. We envision new data, models and knowledge products that could accelerate the innovation process that is needed to achieve sustainable local, regional and global food security. We identify desirable features for models, and describe some of the potential advances that we envisage for model components and their integration. We propose an implementation strategy that would link a "pre-competitive" space for model development to a "competitive space" for knowledge product development and through private-public partnerships for new data infrastructure. Specific model improvements would be based on further testing and evaluation of existing models, the development and testing of modular model components and integration, and linkages of model integration platforms to new data management and visualization tools.

  19. Ice Shapes on a Tail Rotor

    NASA Technical Reports Server (NTRS)

    Kreeger, Richard E.; Tsao, Jen-Ching

    2014-01-01

    Testing of a thermally-protected helicopter rotor in the Icing Research Tunnel (IRT) was completed. Data included inter-cycle and cold blade ice shapes. Accreted ice shapes were thoroughly documented, including tracing, scanning and photographing. This was the first time this scanning capability was used outside of NASA. This type of data has never been obtained for a rotorcraft before. This data will now be used to validate the latest generation of icing analysis tools.

  20. GlassForm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-09-16

    GlassForm is a software tool for generating preliminary waste glass formulas for a given waste stream. The software is useful because it reduces the number of verification melts required to develop a suitable additive composition. It includes property models that calculate glass properties of interest from the chemical composition of the waste glass: viscosity, electrical conductivity, glass transition temperature, and leach resistance as measured by the 7-day product consistency test (PCT).
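
    Property models in tools of this kind are commonly first-order mixture models, in which a property (or its logarithm) is a linear function of component mass fractions. The sketch below illustrates that general form with invented placeholder coefficients; it is not GlassForm's actual model.

    ```python
    # Hedged sketch of a first-order glass property mixture model:
    # ln(viscosity) modeled as a linear function of component mass
    # fractions. Coefficients here are invented placeholders.
    import math

    VISCOSITY_COEFFS = {"SiO2": 8.0, "B2O3": -2.5, "Na2O": -9.0, "Al2O3": 4.0}

    def predict_viscosity(mass_fractions: dict) -> float:
        """Return viscosity (Pa.s) at a reference temperature from composition."""
        assert abs(sum(mass_fractions.values()) - 1.0) < 1e-6
        ln_eta = sum(VISCOSITY_COEFFS[c] * x for c, x in mass_fractions.items())
        return math.exp(ln_eta)

    glass = {"SiO2": 0.55, "B2O3": 0.10, "Na2O": 0.15, "Al2O3": 0.20}
    print(f"{predict_viscosity(glass):.1f} Pa.s")
    ```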

  1. Turbine Aeration Design Software for Mitigating Adverse Environmental Impacts Resulting From Conventional Hydropower Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulliver, John S.

    2015-03-01

    This project developed a conventional hydropower turbine aeration test-bed, along with computational routines and software tools, for improving environmental mitigation technologies for conventional hydropower systems. In achieving this goal, we partnered with Alstom, a global leader in energy technology development and United States power generation, with additional funding from the Initiative for Renewable Energy and the Environment (IREE) and the College of Science and Engineering (CSE) at the University of Minnesota (UMN).

  2. HullBUG Technology Development for Underwater Hull Cleaning

    DTIC Science & Technology

    2013-08-23

    Drawings of the Grooming Tool would be generated and vended out to approved vendors for quoting. Once quotes were obtained and the purchase quantity determined, parts would be vended out and manufactured; these parts would then be assembled at SRC, and a test program would then follow. Tolerances were added to enable quoting by commercial machine shops, and a top assembly drawing was created that identified all machined parts as well as all ...

  3. Tools for the diagnosis of hepatitis C virus infection and hepatic fibrosis staging

    PubMed Central

    Saludes, Verónica; González, Victoria; Planas, Ramon; Matas, Lurdes; Ausina, Vicente; Martró, Elisa

    2014-01-01

    Hepatitis C virus (HCV) infection represents a major public health issue. Hepatitis C can be cured by therapy, but many infected individuals are unaware of their status. Effective HCV screening, fast diagnosis and characterization, and hepatic fibrosis staging are highly relevant for controlling transmission, treating infected patients and, consequently, avoiding end-stage liver disease. Exposure to HCV can be determined with high sensitivity and specificity with currently available third generation serology assays. Additionally, the use of point-of-care tests can increase HCV screening opportunities. However, active HCV infection must be confirmed by direct diagnosis methods. Additionally, HCV genotyping is required prior to starting any treatment. Increasingly, high-volume clinical laboratories use different types of automated platforms, which have simplified sample processing, reduced hands-on time, minimized contamination risks and human error, and ensured full traceability of results. Significant advances have also been made in the field of fibrosis stage assessment with the development of non-invasive methods, such as imaging techniques and serum-based tests. However, no single test is currently available that is able to completely replace liver biopsy. This review focuses on approved commercial tools used to diagnose HCV infection and the recommended hepatic fibrosis staging tests. PMID:24707126

  4. The Cambridge Face Memory Test for Children (CFMT-C): a new tool for measuring face recognition skills in childhood.

    PubMed

    Croydon, Abigail; Pimperton, Hannah; Ewing, Louise; Duchaine, Brad C; Pellicano, Elizabeth

    2014-09-01

    Face recognition ability follows a lengthy developmental course, not reaching maturity until well into adulthood. Valid and reliable assessments of face recognition memory ability are necessary to examine patterns of ability and disability in face processing, yet there is a dearth of such assessments for children. We modified a well-known test of face memory in adults, the Cambridge Face Memory Test (Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-585), to make it developmentally appropriate for children. To establish its utility, we administered either the upright or inverted versions of the computerised Cambridge Face Memory Test - Children (CFMT-C) to 401 children aged between 5 and 12 years. Our results show that the CFMT-C is sufficiently sensitive to demonstrate age-related gains in the recognition of unfamiliar upright and inverted faces, does not suffer from ceiling or floor effects, generates robust inversion effects, and is capable of detecting difficulties in face memory in children diagnosed with autism. Together, these findings indicate that the CFMT-C constitutes a new valid assessment tool for children's face recognition skills. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. jFuzz: A Concolic Whitebox Fuzzer for Java

    NASA Technical Reports Server (NTRS)

    Jayaraman, Karthick; Harvison, David; Ganesh, Vijay; Kiezun, Adam

    2009-01-01

    We present jFuzz, an automatic testing tool for Java programs. jFuzz is a concolic whitebox fuzzer built on the NASA Java PathFinder, an explicit-state Java model checker and framework for developing reliability and analysis tools for Java. Starting from a seed input, jFuzz automatically and systematically generates inputs that exercise new program paths. jFuzz uses a combination of concrete and symbolic execution and constraint solving. Time spent on solving constraints can be significant, so we implemented several well-known optimizations as well as name-independent caching, which aggressively normalizes the constraints to reduce the number of calls to the constraint solver. We present preliminary results from these optimizations, and demonstrate the effectiveness of jFuzz in creating good test inputs. jFuzz is intended to be a research testbed for investigating new testing and analysis techniques based on concrete and symbolic execution. The source code of jFuzz is available as part of the NASA Java PathFinder.
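
    The idea behind name-independent caching can be sketched as follows: path constraints are rewritten so that variables are renamed in order of first appearance, so constraints that differ only in variable names map to the same cache key and reuse a single solver verdict. This is a minimal illustration of the concept, not jFuzz's implementation (which operates on JPF's constraint objects rather than strings).

    ```python
    # Sketch of name-independent constraint caching: variables are renamed
    # in order of first appearance, so alpha-equivalent constraints share
    # one cache entry and one solver call.
    import re

    _solver_calls = 0
    _cache = {}

    def normalize(constraint: str) -> str:
        """Rewrite variable names (x1, y2, ...) to v0, v1, ... by first use."""
        mapping = {}
        def rename(m):
            name = m.group(0)
            mapping.setdefault(name, f"v{len(mapping)}")
            return mapping[name]
        # Assumes identifiers are lowercase-led tokens; fine for the sketch.
        return re.sub(r"[a-z]\w*", rename, constraint)

    def solve(constraint: str) -> bool:
        global _solver_calls
        key = normalize(constraint)
        if key not in _cache:
            _solver_calls += 1      # the real solver would be invoked here
            _cache[key] = True      # placeholder verdict
        return _cache[key]

    solve("x1 > 0 & x1 < y2")
    solve("a > 0 & a < b")          # alpha-equivalent: cache hit
    print(_solver_calls)            # 1
    ```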

  6. Power generation using sugar cane bagasse: A heat recovery analysis

    NASA Astrophysics Data System (ADS)

    Seguro, Jean Vittorio

    The sugar industry is facing the need to improve its performance by increasing efficiency and developing profitable by-products. An important possibility is the production of electrical power for sale. Co-generation has been practiced in the sugar industry for a long time in a very inefficient way with the main purpose of getting rid of the bagasse. The goal of this research was to develop a software tool that could be used to improve the way that bagasse is used to generate power. Special focus was given to the heat recovery components of the co-generation plant (economizer, air pre-heater and bagasse dryer) to determine if one, or a combination, of them led to a more efficient co-generation cycle. An extensive review of the state of the art of power generation in the sugar industry was conducted and is summarized in this dissertation. Based on this review, models were developed. After testing the models and comparing the results with the data collected from the literature, a software application that integrated all these models was developed to simulate the complete co-generation plant. Seven different cycles, three different pressures, and sixty-eight distributions of the flue gas through the heat recovery components can be simulated. The software includes an economic analysis tool that can help the designer determine the economic feasibility of different options. Results from running the simulation are presented that demonstrate its effectiveness in evaluating and comparing the different heat recovery components and power generation cycles. These results indicate that the economizer is the most beneficial option for heat recovery and that the use of waste heat in a bagasse dryer is the least desirable option. Quantitative comparisons of several possible cycle options with the widely-used traditional back-pressure turbine cycle are given. These indicate that a double extraction condensing cycle is best for co-generation purposes. Power generation gains between 40 and 100% are predicted for some cycles with the addition of optimum heat recovery systems.

  7. Scientific Story Telling & Social Media The role of social media in effectively communicating science

    NASA Astrophysics Data System (ADS)

    Brinkhuis, D.; Peart, L.

    2012-12-01

    Scientific discourse generally takes place in appropriate journals, using the language and conventions of science. That's fine, as long as the discourse remains in scientific circles. It is only outside those circles that the rules and techniques of engaging social media tools gain importance. A young generation of scientists is eager to share their experiences by using social media, but is this effective? And how can we better integrate all outreach & media channels to engage general audiences? How can Facebook, Twitter, Skype and YouTube be used as synergy tools in scientific story telling? Case: during IODP Expedition 342 (June-July 2012) onboard the scientific drillship JOIDES Resolution, an onboard educator and videographer worked non-stop for two months on an integrated outreach plan that tried and tested the limits of all social media tools available to interact with an international public while at sea. The results are spectacular!

  8. Enhancing performance of next generation FSO communication systems using soft computing-based predictions.

    PubMed

    Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori

    2006-06-12

    The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce link availability and may introduce burst errors, degrading the performance of the system. We investigate the suitability of soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communications systems. The SC based tools are used for the prediction of key parameters of a FSO communications system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) that predicts future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy in tracking data beams, particularly during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with original measurements.

  9. Scale models: A proven cost-effective tool for outage planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, R.; Segroves, R.

    1995-03-01

    As generation costs for operating nuclear stations have risen, more nuclear utilities have initiated efforts to improve cost effectiveness. Nuclear plant owners are also being challenged with lower radiation exposure limits and newly revised radiation protection regulations (10 CFR 20), which place further stress on their budgets. As source term reduction activities continue to lower radiation fields, reducing the amount of time spent in radiation fields becomes one of the most cost-effective ways of reducing radiation exposure. An effective approach for minimizing time spent in radiation areas is to use a physical scale model for worker orientation planning and monitoring of maintenance, modifications, and outage activities. To meet the challenge of continued reduction in annual cumulative radiation exposures, new cost-effective tools are required. One field-tested and proven tool is the physical scale model.

  10. Benefits of a Unified LaSRS++ Simulation for NAS-Wide and High-Fidelity Modeling

    NASA Technical Reports Server (NTRS)

    Glaab, Patricia; Madden, Michael

    2014-01-01

    The LaSRS++ high-fidelity vehicle simulation was extended in 2012 to support a NAS-wide simulation mode. Since the initial proof-of-concept, the LaSRS++ NAS-wide simulation is maturing into a research-ready tool. A primary benefit of this new capability is the consolidation of the two modeling paradigms under a single framework to save cost, facilitate iterative concept testing between the two tools, and to promote communication and model sharing between user communities at Langley. Specific benefits of each type of modeling are discussed along with the expected benefits of the unified framework. Current capability details of the LaSRS++ NAS-wide simulations are provided, including the visualization tool, live data interface, trajectory generators, terminal routing for arrivals and departures, maneuvering, re-routing, navigation, winds, and turbulence. The plan for future development is also described.

  11. Absorbed Dose Determination Using Experimental and Analytical Predictions of X-Ray Spectra

    NASA Technical Reports Server (NTRS)

    Edwards, D. L.; Carruth, Ralph (Technical Monitor)

    2001-01-01

    Electron beam welding in a vacuum is a technology that NASA is investigating as a joining technique for manufacture of space structures. This investigation characterizes the x-ray environment due to operation of an in-vacuum electron beam welding tool and provides recommendations for adequate shielding for astronauts performing the in-vacuum electron beam welding. NASA, in a joint venture with the Russian Space Agency, was scheduled to perform a series of welding in space experiments on board the U.S. Space Shuttle. This series of experiments was named the international space welding experiment (ISWE). The hardware associated with the ISWE was leased to NASA by the Paton Welding Institute (PWI) in Ukraine for ground-based welding experiments in preparation for flight. Two ground tests were scheduled, using the ISWE electron beam welding tool, to characterize the radiation exposure to an astronaut during the operation of the ISWE. These radiation exposure tests used thermoluminescence dosimeters (TLD's) shielded with material currently used by astronauts during extravehicular activities to measure the radiation dose. The TLD's were exposed to x-ray radiation generated by operation of the ISWE in-vacuum electron beam welding tool. This investigation was the first known application of TLD's to measure absorbed dose from x rays of energy less than 10 keV. The ISWE hardware was returned to Ukraine before the issue of adequate shielding for the astronauts was completely verified. Therefore, alternate experimental and analytical methods were developed to measure and predict the x-ray spectral and intensity distribution generated by ISWE electron beam impact with metal. These x-ray spectra were normalized to an equivalent ISWE exposure, then used to calculate the absorbed radiation dose to astronauts. These absorbed dose values were compared to TLD measurements obtained during actual operation of the ISWE in-vacuum electron beam welding tool. The calculated absorbed dose values were found to be in agreement with the measured TLD values.
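
    For reference, the textbook relation typically used to convert an x-ray fluence spectrum into absorbed dose is shown below; this is a standard form, not an equation quoted from the report.

    ```latex
    % Fluence-to-dose conversion: \Phi(E) is the photon fluence per unit
    % energy and \mu_{en}/\rho the mass energy-absorption coefficient.
    D \;=\; \int_0^{E_{\max}} \Phi(E)\, E \,\frac{\mu_{\mathrm{en}}(E)}{\rho}\, \mathrm{d}E
    ```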

  12. Prediction and Measurement of X-Ray Spectral and Intensity Distributions from Low Energy Electron Impact Sources

    NASA Technical Reports Server (NTRS)

    Edwards, David L.

    1999-01-01

    In-vacuum electron beam welding is a technology that NASA considered as a joining technique for manufacture of space structures. The interaction of energetic electrons with metal produces x-rays. The radiation exposure to astronauts performing the in-vacuum electron beam welding must be characterized and minimized to insure safe operating conditions. This investigation characterized the x-ray environment due to operation of an in-vacuum electron beam welding tool. NASA, in a joint venture with the Russian Space Agency, was scheduled to perform a series of welding in space experiments on board the United States Space Shuttle. This series of experiments was named the International Space Welding Experiment (ISWE). The hardware associated with the ISWE was leased to NASA, by the Paton Welding Institute (PWI) in Ukraine, for ground based welding experiments in preparation for flight. Two tests were scheduled, using the ISWE electron beam welding tool, to characterize the radiation exposure to an astronaut during the operation of the ISWE. These radiation exposure tests consisted of Thermoluminescence Dosimeters (TLD's) shielded with material currently used by astronauts during Extra Vehicular Activities (EVA) and exposed to x-ray radiation generated by operation of an in-vacuum electron beam welding tool. This investigation was the first known application of TLD's to measure absorbed dose from x-rays of energy less than 10 KeV. The ISWE hardware was returned to Ukraine before the issue of adequate shielding for the astronauts was verified. Therefore, alternate experimental and analytical methods were developed to measure and predict the x-ray spectral and intensity distribution generated by electron impact with metal. These x-ray spectra were used to calculate the absorbed radiation dose to astronauts. These absorbed dose values were compared to TLD measurements obtained during actual operation of the in-vacuum electron beam welding tool. The calculated absorbed dose values were found to be in good agreement with the TLD values.

  13. Assessment of Near-Field Sonic Boom Simulation Tools

    NASA Technical Reports Server (NTRS)

    Casper, J. H.; Cliff, S. E.; Thomas, S. D.; Park, M. A.; McMullen, M. S.; Melton, J. E.; Durston, D. A.

    2008-01-01

    A recent study for the Supersonics Project, within the National Aeronautics and Space Administration, has been conducted to assess current in-house capabilities for the prediction of near-field sonic boom. Such capabilities are required to simulate the highly nonlinear flow near an aircraft, wherein a sonic-boom signature is generated. There are many available computational fluid dynamics codes that could be used to provide the near-field flow for a sonic boom calculation. However, such codes have typically been developed for applications involving aerodynamic configurations, for which an efficiently generated computational mesh is usually not optimal for sonic boom prediction. Preliminary guidelines are suggested to characterize a state-of-the-art sonic boom prediction methodology. The available simulation tools that are best suited for incorporation into that methodology are identified; preliminary test cases are presented in support of the selection. During this phase of process definition and tool selection, parallel research was conducted in an attempt to establish criteria that link the properties of a computational mesh to the accuracy of a sonic boom prediction. Such properties include sufficient grid density near shocks and within the zone of influence, which are achieved by adaptation and mesh refinement strategies. Prediction accuracy is validated by comparison with wind tunnel data.

  14. SU-C-202-03: A Tool for Automatic Calculation of Delivered Dose Variation for Off-Line Adaptive Therapy Using Cone Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Lee, S; Chen, S

    Purpose: Monitoring the delivered dose is an important task for adaptive radiotherapy (ART) and for determining when to re-plan. A software tool which enables automatic delivered-dose calculation using cone-beam CT (CBCT) has been developed and tested. Methods: The tool consists of four components: a CBCT Collecting Module (CCM), a Plan Registration Module (PRM), a Dose Calculation Module (DCM), and an Evaluation and Action Module (EAM). The CCM is triggered periodically (e.g., daily at 1:00 AM) to search for newly acquired CBCTs of patients of interest and then export the DICOM files of the images and related registrations defined in ARIA, followed by triggering the PRM. The PRM imports the DICOM images and registrations, and links the CBCTs to the related treatment plan of the patient in the planning system (RayStation V4.5, RaySearch, Stockholm, Sweden). A pre-determined CT-to-density table is automatically applied for dose calculation. The current version of the DCM uses a rigid registration which regards the treatment isocenter of the CBCT as the isocenter of the treatment plan, and then starts the dose calculation automatically. The EAM evaluates the plan using pre-determined plan evaluation parameters: PTV dose-volume metrics and critical organ doses. The tool has been tested for 10 patients. Results: Automatic plans are generated and saved, in the order of the treatment dates, in the Adaptive Planning module of the RayStation planning system, without any manual intervention. Once the CTV dose deviates by more than 3%, both email and page alerts are sent to the patient's physician and physicist so that the case can be examined closely. Conclusion: The tool is capable of performing automatic dose tracking and of alerting clinicians when an action is needed. It is clinically useful for off-line adaptive therapy to catch any gross error. A practical way of determining alarm levels for OARs is under development.

  15. Comparison of BrainTool to other UML modeling and model transformation tools

    NASA Astrophysics Data System (ADS)

    Nikiforova, Oksana; Gusarovs, Konstantins

    2017-07-01

    In the last 30 years, numerous model-driven software generation systems have been offered to address problems with development productivity and the resulting software quality. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) makes similar arguments regarding Unified Modeling Language (UML) models at different levels of abstraction, and it is claimed that software development automation using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features: a model editor and a model repository for the traditional ones; and, for the most advanced ones, a code generator (possibly using a scripting or domain-specific language (DSL)), a transformation tool to produce new artifacts from manually created ones, and a transformation definition editor to define new transformations. The present paper contains the results of a comparison of CASE tools (mainly UML editors) against the level of automation they offer.

  16. Emerging Concepts of Data Integration in Pathogen Phylodynamics.

    PubMed

    Baele, Guy; Suchard, Marc A; Rambaut, Andrew; Lemey, Philippe

    2017-01-01

    Phylodynamics has become an increasingly popular statistical framework to extract evolutionary and epidemiological information from pathogen genomes. By harnessing such information, epidemiologists aim to shed light on the spatio-temporal patterns of spread and to test hypotheses about the underlying interaction of evolutionary and ecological dynamics in pathogen populations. Although the field has witnessed a rich development of statistical inference tools with increasing levels of sophistication, these tools initially focused on sequences as their sole primary data source. Integrating various sources of information, however, promises to deliver more precise insights in infectious diseases and to increase opportunities for statistical hypothesis testing. Here, we review how the emerging concept of data integration is stimulating new advances in Bayesian evolutionary inference methodology which formalize a marriage of statistical thinking and evolutionary biology. These approaches include connecting sequence to trait evolution, such as for host, phenotypic and geographic sampling information, but also the incorporation of covariates of evolutionary and epidemic processes in the reconstruction procedures. We highlight how a full Bayesian approach to covariate modeling and testing can generate further insights into sequence evolution, trait evolution, and population dynamics in pathogen populations. Specific examples demonstrate how such approaches can be used to test the impact of host on rabies and HIV evolutionary rates, to identify the drivers of influenza dispersal as well as the determinants of rabies cross-species transmissions, and to quantify the evolutionary dynamics of influenza antigenicity. Finally, we briefly discuss how data integration is now also permeating through the inference of transmission dynamics, leading to novel insights into tree-generative processes and detailed reconstructions of transmission trees. [Bayesian inference; birth–death models; coalescent models; continuous trait evolution; covariates; data integration; discrete trait evolution; pathogen phylodynamics.]

  17. The influence of control group reproduction on the statistical power of the Environmental Protection Agency's Medaka Extended One Generation Reproduction Test (MEOGRT).

    PubMed

    Flynn, Kevin; Swintek, Joe; Johnson, Rodney

    2017-02-01

    Because of various Congressional mandates to protect the environment from endocrine disrupting chemicals (EDCs), the United States Environmental Protection Agency (USEPA) initiated the Endocrine Disruptor Screening Program. In the context of this framework, the Office of Research and Development within the USEPA developed the Medaka Extended One Generation Reproduction Test (MEOGRT) to characterize the endocrine action of a suspected EDC. One important endpoint of the MEOGRT is fecundity of medaka breeding pairs. Power analyses were conducted to determine the number of replicates needed in proposed test designs and to determine the effects that varying reproductive parameters (e.g. mean fecundity, variance, and days with no egg production) would have on the statistical power of the test. The MEOGRT Reproduction Power Analysis Tool (MRPAT) is a software tool developed to expedite these power analyses by both calculating estimates of the needed reproductive parameters (e.g. population mean and variance) and performing the power analysis under user-specified scenarios. Example scenarios are detailed that highlight the importance of the reproductive parameters on statistical power. When control fecundity is increased from 21 to 38 eggs per pair per day and the variance decreased from 49 to 20, the gain in power is equivalent to increasing replication by 2.5 times. On the other hand, if 10% of the breeding pairs, including controls, do not spawn, the power to detect a 40% decrease in fecundity drops to 0.54 from nearly 0.98 when all pairs have some level of egg production. Perhaps most importantly, MRPAT was used to inform the decision making process that led to the final recommendation of the MEOGRT to have 24 control breeding pairs and 12 breeding pairs in each exposure group. Published by Elsevier Inc.
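
    The kind of scenario analysis MRPAT performs can be approximated with a simple simulation, sketched below under stated assumptions: fecundity per pair is modeled as negative binomial (a common choice for overdispersed counts; the actual tool's distributional assumptions may differ), a fraction of pairs may fail to spawn, and power is the fraction of simulated experiments in which a Welch t-test detects a 40% fecundity decrease at alpha = 0.05.

    ```python
    # Simulation-based power sketch in the spirit of MRPAT (not the actual
    # tool). Defaults use the abstract's baseline: mean 21, variance 49,
    # 24 control pairs vs 12 exposed pairs.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def simulate_power(mean=21, var=49, effect=0.40, p_no_spawn=0.0,
                       n_ctrl=24, n_trt=12, reps=2000, alpha=0.05):
        hits = 0
        for _ in range(reps):
            def group(n, mu):
                # NB requires var > mean; otherwise fall back to near-Poisson.
                p = mu / var if var > mu else 0.99
                r = mu * p / (1 - p)
                eggs = rng.negative_binomial(r, p, size=n).astype(float)
                eggs[rng.random(n) < p_no_spawn] = 0.0   # non-spawning pairs
                return eggs
            c, t = group(n_ctrl, mean), group(n_trt, mean * (1 - effect))
            if stats.ttest_ind(c, t, equal_var=False).pvalue < alpha:
                hits += 1
        return hits / reps

    print(simulate_power())                   # baseline scenario
    print(simulate_power(mean=38, var=20))    # improved control fecundity
    print(simulate_power(p_no_spawn=0.10))    # 10% non-spawning pairs
    ```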

  18. Emerging Concepts of Data Integration in Pathogen Phylodynamics

    PubMed Central

    Baele, Guy; Suchard, Marc A.; Rambaut, Andrew; Lemey, Philippe

    2017-01-01

    Phylodynamics has become an increasingly popular statistical framework to extract evolutionary and epidemiological information from pathogen genomes. By harnessing such information, epidemiologists aim to shed light on the spatio-temporal patterns of spread and to test hypotheses about the underlying interaction of evolutionary and ecological dynamics in pathogen populations. Although the field has witnessed a rich development of statistical inference tools with increasing levels of sophistication, these tools initially focused on sequences as their sole primary data source. Integrating various sources of information, however, promises to deliver more precise insights in infectious diseases and to increase opportunities for statistical hypothesis testing. Here, we review how the emerging concept of data integration is stimulating new advances in Bayesian evolutionary inference methodology which formalize a marriage of statistical thinking and evolutionary biology. These approaches include connecting sequence to trait evolution, such as for host, phenotypic and geographic sampling information, but also the incorporation of covariates of evolutionary and epidemic processes in the reconstruction procedures. We highlight how a full Bayesian approach to covariate modeling and testing can generate further insights into sequence evolution, trait evolution, and population dynamics in pathogen populations. Specific examples demonstrate how such approaches can be used to test the impact of host on rabies and HIV evolutionary rates, to identify the drivers of influenza dispersal as well as the determinants of rabies cross-species transmissions, and to quantify the evolutionary dynamics of influenza antigenicity. Finally, we briefly discuss how data integration is now also permeating through the inference of transmission dynamics, leading to novel insights into tree-generative processes and detailed reconstructions of transmission trees. [Bayesian inference; birth–death models; coalescent models; continuous trait evolution; covariates; data integration; discrete trait evolution; pathogen phylodynamics.] PMID:28173504

  19. Evaluation of Targeted Next-Generation Sequencing for Detection of Bovine Pathogens in Clinical Samples.

    PubMed

    Anis, Eman; Hawkins, Ian K; Ilha, Marcia R S; Woldemeskel, Moges W; Saliki, Jeremiah T; Wilkes, Rebecca P

    2018-07-01

    The laboratory diagnosis of infectious diseases, especially those caused by mixed infections, is challenging. Routinely, it requires submission of multiple samples to separate laboratories. Advances in next-generation sequencing (NGS) have provided the opportunity for development of a comprehensive method to identify infectious agents. This study describes the use of target-specific primers for PCR-mediated amplification with the NGS technology in which pathogen genomic regions of interest are enriched and selectively sequenced from clinical samples. In the study, 198 primers were designed to target 43 common bovine and small-ruminant bacterial, fungal, viral, and parasitic pathogens, and a bioinformatics tool was specifically constructed for the detection of targeted pathogens. The primers were confirmed to detect the intended pathogens by testing reference strains and isolates. The method was then validated using 60 clinical samples (including tissues, feces, and milk) that were also tested with other routine diagnostic techniques. The detection limits of the targeted NGS method were evaluated using 10 representative pathogens that were also tested by quantitative PCR (qPCR), and the NGS method was able to detect the organisms from samples with qPCR threshold cycle (CT) values in the 30s. The method was successful for the detection of multiple pathogens in the clinical samples, including some additional pathogens missed by the routine techniques because the specific tests needed for the particular organisms were not performed. The results demonstrate the feasibility of the approach and indicate that it is possible to incorporate NGS as a diagnostic tool in a cost-effective manner into a veterinary diagnostic laboratory. Copyright © 2018 Anis et al.

  20. ISHM Anomaly Lexicon for Rocket Test

    NASA Technical Reports Server (NTRS)

    Schmalzel, John L.; Buchanan, Aubri; Hensarling, Paula L.; Morris, Jonathan; Turowski, Mark; Figueroa, Jorge F.

    2007-01-01

    Integrated Systems Health Management (ISHM) is a comprehensive capability. An ISHM system must detect anomalies, identify causes of such anomalies, predict future anomalies, and help identify the consequences of anomalies, for example by suggesting mitigation steps. The system should also provide users with appropriate navigation tools to facilitate the flow of information into and out of the ISHM system. Central to the ability of the ISHM to detect anomalies is a clearly defined catalog of anomalies. Further, this lexicon of anomalies must be organized in ways that make it accessible to a suite of tools used to manage the data, information and knowledge (DIaK) associated with a system. In particular, it is critical to ensure that there is optimal mapping between target anomalies and the algorithms associated with their detection. During the early development of our ISHM architecture and approach, it became clear that a lexicon of anomalies would be important to the development of critical anomaly detection algorithms. In our work in the rocket engine test environment at John C. Stennis Space Center, we have access to a repository of discrepancy reports (DRs) that are generated in response to squawks identified during post-test data analysis. The DR is the tool used to document anomalies and the methods used to resolve the issue. These DRs have been generated for many different tests and for all test stands. The result is that they represent a comprehensive summary of the anomalies associated with rocket engine testing. Fig. 1 illustrates some of the data that can be extracted from a DR. Such information includes affected transducer channels, a narrative description of the observed anomaly, and the steps used to correct the problem. The primary goal of the anomaly lexicon development effort we have undertaken is to create a lexicon that could be used in support of an associated health assessment database system (HADS) co-development effort. There are a number of significant byproducts of the anomaly lexicon compilation effort. For example: (1) it allows determination of the frequency distribution of anomalies, to help identify those with the potential for high return on investment if included in automated detection as part of an ISHM system; (2) availability of a regular lexicon could provide the base anomaly name choices to help maintain consistency in the DR collection process; and (3) although developed for the rocket engine test environment, most of the anomalies are not specific to rocket testing and thus can be reused in other applications.

  1. Non-competitive inhibition by active site binders.

    PubMed

    Blat, Yuval

    2010-06-01

    Classical enzymology has been used for generations to understand the interactions of inhibitors with their enzyme targets. Enzymology tools enabled prediction of the biological impact of inhibitors as well as the development of novel, more potent ones. Experiments designed to examine the competition between the tested inhibitor and the enzyme substrate(s) are the tool of choice to identify inhibitors that bind in the active site. Competition between an inhibitor and a substrate is considered strong evidence for binding of the inhibitor in the active site, while the lack of competition suggests binding to an alternative site. Nevertheless, exceptions to this notion do exist. Active site-binding inhibitors can display non-competitive inhibition patterns. This unusual behavior has been observed with enzymes utilizing an exosite for substrate binding, isomechanism enzymes, enzymes with multiple substrates and/or products, and two-step binding inhibitors. In many of these cases, the mechanisms underlying the lack of competition between the substrate and the inhibitor are well understood. Tools like alternative substrates, testing the enzyme reaction in the reverse direction, and monitoring inhibition time dependence can be applied to enable distinction between 'badly behaving' active site binders and true exosite inhibitors.
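
    For reference, the standard Michaelis-Menten rate laws make the usual diagnostic reading concrete: under competitive inhibition, raising [S] outcompetes the inhibitor and restores v toward V_max, whereas under non-competitive inhibition V_max is depressed at any [S]. These are textbook forms, included here only to show why departures from them are informative.

    ```latex
    % Competitive inhibition: apparent K_m rises, V_max unchanged;
    % at saturating [S] the inhibitor is fully outcompeted.
    v_{\mathrm{comp}} = \frac{V_{\max}\,[S]}{K_m\left(1 + [I]/K_i\right) + [S]}

    % Non-competitive inhibition: V_max falls by (1 + [I]/K_i) at any [S],
    % so no amount of substrate relieves the inhibition.
    v_{\mathrm{noncomp}} = \frac{V_{\max}\,[S]}{\left(K_m + [S]\right)\left(1 + [I]/K_i\right)}
    ```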

  2. Toward a Real-Time Measurement-Based System for Estimation of Helicopter Engine Degradation Due to Compressor Erosion

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Simo, Donald L.

    2007-01-01

    This paper presents a preliminary demonstration of an automated health assessment tool, capable of real-time on-board operation using existing engine control hardware. The tool allows operators to discern how rapidly individual turboshaft engines are degrading. As the compressor erodes, performance is lost, and with it the ability to generate power. Thus, such a tool would provide an instant assessment of the engine's fitness to perform a mission, and would help to pinpoint any abnormal wear or performance anomalies before they became serious, thereby decreasing uncertainty and enabling improved maintenance scheduling. The research described in the paper utilized test stand data from a T700-GE-401 turboshaft engine that underwent sand-ingestion testing to scale a model-based compressor efficiency degradation estimation algorithm. This algorithm was then applied to real-time Health and Usage Monitoring System (HUMS) data from a T700-GE-701C to track compressor efficiency on-line. The approach uses an optimal estimator called a Kalman filter. The filter is designed to estimate the compressor efficiency using only data from the engine's sensors as input.
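
    The estimator's basic shape can be illustrated with a scalar Kalman filter that tracks a slowly drifting efficiency parameter from noisy per-flight measurements. This is a toy sketch with synthetic data, not the paper's model-based T700 filter.

    ```python
    # Minimal scalar Kalman filter tracking a slowly eroding compressor
    # efficiency from noisy sensor-derived measurements (illustrative only).
    import numpy as np

    rng = np.random.default_rng(2)
    true_eta = 0.85 - 0.0005 * np.arange(200)      # slow erosion
    meas = true_eta + rng.normal(0, 0.01, 200)     # noisy estimates

    x, P = 0.85, 1.0          # state estimate and its variance
    Q, R = 1e-6, 0.01**2      # process and measurement noise variances
    est = []
    for z in meas:
        P += Q                 # predict (random-walk degradation model)
        K = P / (P + R)        # Kalman gain
        x += K * (z - x)       # update with measurement residual
        P *= (1 - K)
        est.append(x)

    print(f"final estimate {est[-1]:.4f} vs truth {true_eta[-1]:.4f}")
    ```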

  3. Mirage: a visible signature evaluation tool

    NASA Astrophysics Data System (ADS)

    Culpepper, Joanne B.; Meehan, Alaster J.; Shao, Q. T.; Richards, Noel

    2017-10-01

    This paper presents the Mirage visible signature evaluation tool, designed to provide a visible signature evaluation capability that will appropriately reflect the effect of scene content on the detectability of targets, providing a capability to assess visible signatures in the context of the environment. Mirage is based on a parametric evaluation of input images, assessing the value of a range of image metrics and combining them using the boosted decision tree machine learning method to produce target detectability estimates. It has been developed using experimental data from photosimulation experiments, where human observers search for vehicle targets in a variety of digital images. The images used for tool development are synthetic (computer generated) images, showing vehicles in many different scenes and exhibiting a wide variation in scene content. A preliminary validation has been performed using k-fold cross validation, where 90% of the image data set was used for training and 10% of the image data set was used for testing. The results of the k-fold validation from 200 independent tests show a prediction accuracy between Mirage predictions of detection probability and observed probability of detection of r(262) = 0.63, p < 0.0001 (Pearson correlation) and a MAE = 0.21 (mean absolute error).
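
    The pipeline described above can be sketched as follows, under stated assumptions: X holds per-image metric values, y the observed probabilities of detection from photosimulation, and a gradient-boosted tree regressor is evaluated with k-fold cross-validation using the same figures of merit (Pearson r and MAE). The data here are synthetic stand-ins, and scikit-learn's regressor stands in for whatever boosted-tree implementation Mirage uses.

    ```python
    # Mirage-style sketch: image metrics -> boosted trees -> P(detect),
    # scored by cross-validated Pearson r and mean absolute error.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(3)
    X = rng.normal(size=(290, 8))                   # stand-in image metrics
    y = np.clip(0.5 + 0.3 * X[:, 0] - 0.2 * X[:, 1]
                + rng.normal(0, 0.1, 290), 0, 1)    # stand-in P(detect)

    preds, truth = [], []
    for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        model = GradientBoostingRegressor().fit(X[train], y[train])
        preds.extend(model.predict(X[test]))
        truth.extend(y[test])

    r, _ = pearsonr(truth, preds)
    mae = np.mean(np.abs(np.array(truth) - np.array(preds)))
    print(f"r = {r:.2f}, MAE = {mae:.2f}")
    ```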

  4. The generation of criteria for selecting analytical tools for landscape management

    Treesearch

    Marilyn Duffey-Armstrong

    1979-01-01

    This paper presents an approach to generating criteria for selecting the analytical tools used to assess visual resources for various landscape management tasks. The approach begins by first establishing the overall parameters for the visual assessment task, and follows by defining the primary requirements of the various sets of analytical tools to be used. Finally,...

  5. Self-optimizing Monte Carlo method for nuclear well logging simulation

    NASA Astrophysics Data System (ADS)

    Liu, Lianyan

    1997-09-01

    In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated in the regular Monte Carlo calculation as a by-product, and the importance map is later used to conduct the splitting and Russian roulette for particle population control. By adopting a spatial mesh system, which is independent of the physical geometrical configuration, the method allows superior user-friendliness. This new method is incorporated into the general-purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test the performance of this new method. The calculations are sped up over analog simulation by 120 and 2600 times, for the neutron porosity tool and for the gamma-ray lithology density log, respectively. The new method outperforms MCNP's cell-based weight window by a factor of 4-6, as measured by the converged figures of merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases, respectively. The learning ability towards a correct importance map is also demonstrated. Although false learning may happen, physical judgment with contributon maps can help diagnose it. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Due to the fact that a very good initial importance map is always available after the first point has been calculated, high computing efficiency is maintained. The availability of contributon maps provides an easy way of understanding the logging measurement and analyzing the depth of investigation.
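
    The population-control step driven by the importance map can be sketched as follows: when a particle crosses between mesh cells, it is split if importance rose and played Russian roulette if importance fell, so that the product of weight and importance stays roughly constant. This is a generic illustration of splitting/roulette, not MCNP4A's patch.

    ```python
    # Importance-map-driven population control: split on rising importance,
    # Russian roulette on falling importance, conserving expected weight.
    import random

    def population_control(weight, imp_old, imp_new, rng=random.random):
        """Return the list of particle weights after splitting/roulette."""
        ratio = imp_new / imp_old
        if ratio >= 1.0:                   # entering a more important region
            n = int(ratio)
            if rng() < ratio - n:          # probabilistic extra copy
                n += 1
            return [weight / ratio] * n    # split into n lighter copies
        # Less important region: survive with probability = ratio.
        if rng() < ratio:
            return [weight / ratio]        # survivor carries more weight
        return []                          # killed by roulette

    random.seed(4)
    print(population_control(1.0, imp_old=1.0, imp_new=3.0))  # ~3 copies
    print(population_control(1.0, imp_old=4.0, imp_new=1.0))  # survive or die
    ```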

  6. Practical Results from the Application of Model Checking and Test Generation from UML/SysML Models of On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Faria, J. M.; Mahomad, S.; Silva, N.

    2009-05-01

    The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, along with new challenges. This paper presents the results of a research project in which we extended current V&V methodologies to apply to UML/SysML models, aiming to answer demands related to validation issues. Two quite different but complementary approaches were investigated: (i) model checking and (ii) extraction of robustness test cases from the same models. These two approaches do not overlap, and when combined they provide a wider-reaching model/design validation ability than either one alone, thus offering improved safety assurance. Results are very encouraging, even though they either fell short of the desired outcome, as shown for model checking, or appear not fully matured, as shown for robustness test-case extraction. In the case of model checking, it was verified that the automatic model validation process can become fully operational, and even expand in scope, once tool vendors (inevitably) help to improve the XMI standard interoperability situation. For the robustness test-case extraction methodology, the early approach produced interesting results but needs further systematisation and consolidation in order to produce results in a more predictable fashion and to reduce reliance on experts' heuristics. Finally, further improvements and innovation research projects were immediately apparent for both approaches: circumventing current limitations in XMI interoperability on the one hand, and, on the other, bringing test-case specification onto the same graphical level as the models themselves and then attempting to automate the generation of executable test cases from standard UML notation.

  7. The development and pilot testing of a rapid assessment tool to improve local public health system capacity in Australia.

    PubMed

    Bagley, Prue; Lin, Vivian

    2009-11-15

    To operate effectively, the public health system requires infrastructure and the capacity to act. Public health's ability to attract funding for infrastructure and capacity development would be enhanced if it could demonstrate what level of capacity is required to ensure a high-performing system. Australia's public health activities are undertaken within a complex organizational framework that involves three levels of government and a diverse range of other organizations. The question of appropriate levels of infrastructure and capacity is critical at each level, yet comparatively little is known about infrastructure and capacity at the local level. In-depth interviews were conducted with senior managers in two Australian states with different frameworks for health administration. They were asked to reflect on the critical components of infrastructure and capacity required at the local level. The interviews were analyzed to identify the major themes, and workshops with public health experts explored the data further. The information generated was used to develop a tool, designed to be used by groups of organizations within discrete geographical locations, to assess local public health capacity. Local actors in these two different systems pointed to similar areas for inclusion in an instrument mapping public health capacity at the local level. The tool asks respondents to consider resources, programs and the cultural environment within their organization. It also asks about the policy environment - recognizing that the broader environment within which organizations operate affects their capacity to act. Pilot testing of the tool pointed to some of the challenges involved in such an exercise, particularly if the tool were to be adopted as policy. This research indicates that it is possible to develop a tool for the systematic assessment of public health capacity at the local level. Piloting the tool revealed some concerns amongst participants, particularly about how the tool would be used; however, there was also recognition that the areas covered by the tool were those considered relevant.

  8. Guiding principles for the implementation of non-animal safety assessment approaches for cosmetics: skin sensitisation.

    PubMed

    Goebel, Carsten; Aeby, Pierre; Ade, Nadège; Alépée, Nathalie; Aptula, Aynur; Araki, Daisuke; Dufour, Eric; Gilmour, Nicola; Hibatallah, Jalila; Keller, Detlef; Kern, Petra; Kirst, Annette; Marrec-Fairley, Monique; Maxwell, Gavin; Rowland, Joanna; Safford, Bob; Schellauf, Florian; Schepky, Andreas; Seaman, Chris; Teichert, Thomas; Tessier, Nicolas; Teissier, Silvia; Weltzien, Hans Ulrich; Winkler, Petra; Scheel, Julia

    2012-06-01

    Characterisation of skin sensitisation potential is a key endpoint for the safety assessment of cosmetic ingredients, especially when significant dermal exposure to an ingredient is expected. At present the mouse local lymph node assay (LLNA) remains the 'gold standard' test method for this purpose; however, non-animal test methods are under development that aim to replace the need for new animal test data. COLIPA (the European Cosmetics Association) funds an extensive programme of skin sensitisation research, method development and method evaluation, and helped coordinate the early evaluation of the three test methods currently undergoing pre-validation. In May 2010, a COLIPA scientific meeting was held to analyse the extent to which skin sensitisation safety assessments for cosmetic ingredients can be made in the absence of animal data. To propose guiding principles for the application and further development of non-animal safety assessment strategies, the meeting evaluated how and when non-animal test methods, predictions based on physico-chemical properties (including in silico tools), threshold concepts and weight-of-evidence based hazard characterisation could be used to enable safety decisions. Generation and assessment of potency information from alternative tools, which at present is predominantly derived from the LLNA, is considered the key area for future research. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karp, Peter D.

    Pathway Tools is a systems-biology software package written by SRI International (SRI) that produces Pathway/Genome Databases (PGDBs) for organisms with a sequenced genome. Pathway Tools also provides a wide range of capabilities for analyzing predicted metabolic networks and user-generated omics data. More than 5,000 academic, industrial, and government groups have licensed Pathway Tools. This user community includes researchers at all three DOE bioenergy centers, as well as academic and industrial metabolic engineering (ME) groups. An integral part of the Pathway Tools software is MetaCyc, a large, multiorganism database of metabolic pathways and enzymes that SRI and its academic collaborators manually curate. This project included two main goals: I. Enhance the MetaCyc content of bioenergy-related enzymes and pathways. II. Develop computational tools for engineering metabolic pathways that satisfy specified design goals, in particular for bioenergy-related pathways. In part I, SRI proposed to significantly expand the coverage of bioenergy-related metabolic information in MetaCyc, followed by the generation of organism-specific PGDBs for all energy-relevant organisms sequenced at the DOE Joint Genome Institute (JGI). Part I objectives included: 1: Expand the content of MetaCyc to include bioenergy-related enzymes and pathways. 2: Enhance the Pathway Tools software to enable display of complex polymer degradation processes. 3: Create new PGDBs for the energy-related organisms sequenced by JGI, update existing PGDBs with new MetaCyc content, and make these data available to JBEI via the BioCyc website. In part II, SRI proposed to develop an efficient computational tool for the engineering of metabolic pathways. Part II objectives included: 4: Develop computational tools for generating metabolic pathways that satisfy specified design goals, enabling users to specify parameters such as starting and ending compounds, and preferred or disallowed intermediate compounds. The pathways were to be generated using metabolic reactions from a reference database (DB). 5: Develop computational tools for ranking the pathways generated in objective (4) according to their optimality. The ranking criteria include stoichiometric yield, the number and cost of additional inputs and the cofactor compounds required by the pathway, pathway length, and pathway energetics. 6: Develop tools for visualizing generated pathways to facilitate the evaluation of a large space of generated pathways.
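
    Objective 4's pathway generation can be pictured as a search over a reaction database. The sketch below is a deliberately simplified Python illustration (single-substrate reactions, breadth-first search, a toy database) and not SRI's actual algorithm; all names are invented.

      from collections import deque

      def find_pathways(reactions, start, goal, disallowed=frozenset(), max_len=6):
          """Yield reaction sequences converting `start` into `goal`.

          `reactions` maps a reaction id to a (substrate, product) pair;
          `disallowed` holds forbidden intermediate compounds.
          """
          queue = deque([(start, [])])
          while queue:
              compound, path = queue.popleft()
              if compound == goal:
                  yield path
                  continue
              if len(path) >= max_len:      # bound the search depth
                  continue
              for rxn, (substrate, product) in reactions.items():
                  if substrate == compound and product not in disallowed:
                      queue.append((product, path + [rxn]))

      reactions = {"r1": ("glucose", "pyruvate"), "r2": ("pyruvate", "ethanol")}
      print(list(find_pathways(reactions, "glucose", "ethanol")))  # [['r1', 'r2']]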

  10. Using habitat suitability models to target invasive plant species surveys

    USGS Publications Warehouse

    Crall, Alycia W.; Jarnevich, Catherine S.; Panke, Brendon; Young, Nick; Renz, Mark; Morisette, Jeffrey

    2013-01-01

    Managers need new tools for detecting the movement and spread of nonnative, invasive species. Habitat suitability models are a popular tool for mapping the potential distribution of current invaders, but the ability of these models to prioritize monitoring efforts has not been tested in the field. We tested the utility of an iterative sampling design (i.e., models based on field observations used to guide subsequent field data collection to improve the model), hypothesizing that model performance would increase when new data were gathered from targeted sampling using criteria based on the initial model results. We also tested the ability of habitat suitability models to predict the spread of invasive species, hypothesizing that models would accurately predict occurrences in the field, and that the use of targeted sampling would detect more species with less sampling effort than a nontargeted approach. We tested these hypotheses on two species at the state scale (Centaurea stoebe and Pastinaca sativa) in Wisconsin (USA), and one genus at the regional scale (Tamarix) in the western United States. These initial data were merged with environmental data at 30-m² resolution for Wisconsin and 1-km² resolution for the western United States to produce our first iteration models. We stratified these initial models to target field sampling and compared our models and success at detecting our species of interest to other surveys being conducted during the same field season (i.e., nontargeted sampling). Although more data did not always improve our models based on correct classification rate (CCR), sensitivity, specificity, kappa, or area under the curve (AUC), our models generated from targeted sampling data always performed better than models generated from nontargeted data. For Wisconsin species, the model described actual locations in the field fairly well (kappa = 0.51, 0.19, P < 0.01), and targeted sampling detected more species than nontargeted sampling with less sampling effort (chi2 = 47.42, P < 0.01). From these findings, we conclude that habitat suitability models can be highly useful tools for guiding invasive species monitoring, and we support the use of an iterative sampling design for guiding such efforts.
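
    As a concrete picture of targeted sampling, the toy Python fragment below stratifies a model's suitability surface and concentrates survey effort in high-suitability cells; the thresholds, grid, and per-stratum sample sizes are invented for illustration, not taken from the study.

      import numpy as np

      rng = np.random.default_rng(0)
      suitability = rng.random((100, 100))            # stand-in model output, 0..1

      strata = np.digitize(suitability, [0.33, 0.66])  # 0=low, 1=mid, 2=high
      samples_per_stratum = {0: 10, 1: 30, 2: 60}      # effort weighted to high

      sites = []
      for s, n in samples_per_stratum.items():
          cells = np.argwhere(strata == s)             # (row, col) cells in stratum
          sites.append(cells[rng.choice(len(cells), size=n, replace=False)])
      plan = np.vstack(sites)                          # cells to visit in the field
      print(plan.shape)                                # (100, 2)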

  11. Using habitat suitability models to target invasive plant species surveys.

    PubMed

    Crall, Alycia W; Jarnevich, Catherine S; Panke, Brendon; Young, Nick; Renz, Mark; Morisette, Jeffrey

    2013-01-01

    Managers need new tools for detecting the movement and spread of nonnative, invasive species. Habitat suitability models are a popular tool for mapping the potential distribution of current invaders, but the ability of these models to prioritize monitoring efforts has not been tested in the field. We tested the utility of an iterative sampling design (i.e., models based on field observations used to guide subsequent field data collection to improve the model), hypothesizing that model performance would increase when new data were gathered from targeted sampling using criteria based on the initial model results. We also tested the ability of habitat suitability models to predict the spread of invasive species, hypothesizing that models would accurately predict occurrences in the field, and that the use of targeted sampling would detect more species with less sampling effort than a nontargeted approach. We tested these hypotheses on two species at the state scale (Centaurea stoebe and Pastinaca sativa) in Wisconsin (USA), and one genus at the regional scale (Tamarix) in the western United States. These initial data were merged with environmental data at 30-m² resolution for Wisconsin and 1-km² resolution for the western United States to produce our first iteration models. We stratified these initial models to target field sampling and compared our models and success at detecting our species of interest to other surveys being conducted during the same field season (i.e., nontargeted sampling). Although more data did not always improve our models based on correct classification rate (CCR), sensitivity, specificity, kappa, or area under the curve (AUC), our models generated from targeted sampling data always performed better than models generated from nontargeted data. For Wisconsin species, the model described actual locations in the field fairly well (kappa = 0.51, 0.19, P < 0.01), and targeted sampling did detect more species than nontargeted sampling with less sampling effort (chi2 = 47.42, P < 0.01). From these findings, we conclude that habitat suitability models can be highly useful tools for guiding invasive species monitoring, and we support the use of an iterative sampling design for guiding such efforts.

  12. Cyclostationarity approach for monitoring chatter and tool wear in high speed milling

    NASA Astrophysics Data System (ADS)

    Lamraoui, M.; Thomas, M.; El Badaoui, M.

    2014-02-01

    Detection of chatter and tool wear is crucial in the machining process, and their monitoring is a key issue for: (1) ensuring better surface quality, (2) increasing productivity and (3) protecting both the machine and the workpiece. This paper presents an investigation of chatter and tool wear using cyclostationary methods to process vibration signals acquired during high speed milling. Experimental cutting tests were performed on a slot milling operation in an aluminum alloy. The experimental set-up was designed to acquire accelerometer signals together with information picked up from an encoder. The encoder signal is used to re-sample the accelerometer signals in the angular domain, using a specific algorithm developed at the LASPI laboratory. Cyclostationary analysis of the accelerometer signals was applied to monitor chatter and tool wear in high speed milling. Cyclostationarity appears in the average properties of the signals (first order) and in their energetic properties (second order), and it generates spectral lines at cyclic frequencies in the spectral correlation. Angular power and angular kurtosis are used to analyze the chatter phenomenon. The formation of chatter is characterized by unstable, chaotic motion of the tool and strong anomalous fluctuations of the cutting forces. Results show that stable machining generates only very few second-order cyclostationary components, while chatter is strongly correlated with second-order cyclostationarity. When machining in the unstable region, chatter results in flat angular kurtosis and flat angular power, resembling a pseudo-random (white) signal with a flat spectrum. The results reveal that the spectral correlation and the Wigner-Ville spectrum (or integrated Wigner-Ville) derived from second-order cyclostationarity are efficient indicators for the early diagnosis of faults in high speed machining, such as chatter, tool wear and bearing faults, compared with traditional stationary methods. The Wigner-Ville representation of the residual signal shows that the energy corresponding to the tooth passing decreases when the chatter phenomenon occurs. The effect of tool wear and of the number of broken teeth on the excitation of structural resonances also appears in the Wigner-Ville representation.
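
    The angular re-sampling step lends itself to a compact sketch. The Python fragment below is illustrative only (the function names and uniform-grid convention are assumptions, not the LASPI algorithm): it interpolates time-sampled accelerometer data onto a uniform shaft-angle grid and computes a per-angle kurtosis across revolutions, a simple second-order cyclostationary indicator.

      import numpy as np

      def angular_resample(t, accel, t_enc, angle_enc, samples_per_rev=256):
          """Interpolate an accelerometer signal onto a uniform angle grid.

          Returns an array of shape (n_revolutions, samples_per_rev).
          """
          angle = np.interp(t, t_enc, angle_enc)          # shaft angle per sample
          n_rev = int(angle[-1] // (2 * np.pi))
          grid = np.arange(n_rev * samples_per_rev) * (2 * np.pi / samples_per_rev)
          return np.interp(grid, angle, accel).reshape(n_rev, samples_per_rev)

      def angular_kurtosis(cycles):
          """Kurtosis at each angular position across revolutions."""
          mu, sd = cycles.mean(axis=0), cycles.std(axis=0)
          return ((cycles - mu) ** 4).mean(axis=0) / sd**4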

  13. Development of an Online Toolkit for Measuring Performance in Health Emergency Response Exercises.

    PubMed

    Agboola, Foluso; Bernard, Dorothy; Savoia, Elena; Biddinger, Paul D

    2015-10-01

    Exercises that simulate emergency scenarios are widely accepted as an essential component of a robust Emergency Preparedness program. Unfortunately, the variability in the quality of the exercises conducted and the lack of standardized processes to measure performance have limited the value of exercises in measuring preparedness. To help health organizations improve the quality and standardization of the performance data they collect during simulated emergencies, a model online exercise evaluation toolkit was developed using performance measures tested in over 60 Emergency Preparedness exercises. The exercise evaluation toolkit contains three major components: (1) a database of measures that can be used to assess performance during an emergency response exercise; (2) a standardized data collection tool (form); and (3) a program that populates the data collection tool with the measures that have been selected by the user from the database. The evaluation toolkit was pilot tested from January through September 2014 in collaboration with 14 partnering organizations representing 10 public health agencies and four health care agencies from eight states across the US. Exercise planners from the partnering organizations were asked to use the toolkit for their exercise evaluation process and were interviewed to provide feedback on the use of the toolkit, the generated evaluation tool, and the usefulness of the data being gathered for the development of the exercise after-action report. Ninety-three percent (93%) of exercise planners reported that they found the online database of performance measures appropriate for the creation of exercise evaluation forms, and they stated that they would use it again for future exercises. Seventy-two percent (72%) liked the exercise evaluation form that was generated from the toolkit, and 93% reported that the data collected by the use of the evaluation form were useful in gauging their organization's performance during the exercise. Seventy-nine percent (79%) of exercise planners preferred the evaluation form generated by the toolkit to other forms of evaluations. Results of this project show that users found the newly developed toolkit to be user friendly and more relevant to the measurement of specific public health and health care capabilities than other tools currently available. The developed toolkit may contribute to the further development of a valid approach to exercise performance measurement.

  14. POST II Trajectory Animation Tool Using MATLAB, V1.0

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad

    2005-01-01

    A trajectory animation tool has been developed for accurately depicting the position and attitude of bodies in flight. The movies generated by this MATLAB-based tool serve as an engineering analysis aid for gaining further insight into the dynamic behavior of bodies in flight. The tool is designed to interface with the output generated by POST II simulations and is able to animate single as well as multiple vehicles in flight.

  15. QSARpy: A new flexible algorithm to generate QSAR models based on dissimilarities. The log Kow case study.

    PubMed

    Ferrari, Thomas; Lombardo, Anna; Benfenati, Emilio

    2018-05-14

    Several methods exist to develop QSAR models automatically. Some are based on indices of the presence of atoms, others on the most similar compounds, others on molecular descriptors. Here we introduce QSARpy v1.0, a new QSAR modeling tool based on a different approach: dissimilarity. The tool fragments the molecules of the training set to extract fragments that can be associated with a difference in the property/activity value, called modulators. If the target molecule shares part of its structure with a molecule of the training set, and the differences can be explained by one or more modulators, the property/activity value of the training molecule is adjusted using the values associated with the modulator(s). The tool is tested here on the n-octanol/water partition coefficient (Kow, usually expressed in logarithmic units as log Kow), a key parameter in risk assessment since it is a measure of hydrophobicity. Its widespread use makes estimation methods very useful for reducing testing costs. Using QSARpy v1.0, we obtained a new model predicting log Kow with good accuracy (RMSE = 0.43 and R² = 0.94 for the external test set), comparing favorably with other programs. QSARpy is freely available on request. Copyright © 2018 Elsevier B.V. All rights reserved.
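
    The modulator idea can be illustrated in a few lines of Python. The sketch below is not the QSARpy implementation: the fragment names, the modulator table and the example values are invented (the CH2 increment of roughly +0.5 log units is a well-known rule of thumb for log Kow).

      # Hypothetical modulator table: fragment -> shift in log Kow.
      MODULATORS = {"CH2": 0.5, "OH": -1.1}

      def predict(target_frags, neighbor_frags, neighbor_value):
          """Adjust a training neighbor's measured value by fragment differences."""
          added = target_frags - neighbor_frags      # fragments only in the target
          removed = neighbor_frags - target_frags    # fragments only in the neighbor
          value = neighbor_value
          value += sum(MODULATORS[f] for f in added if f in MODULATORS)
          value -= sum(MODULATORS[f] for f in removed if f in MODULATORS)
          return value

      # Ethanol from methanol (measured log Kow -0.77): one extra CH2 unit.
      print(predict({"CH2"}, set(), -0.77))          # -0.27, close to measured -0.31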

  16. Laboratory information management system: an example of international cooperation in Namibia.

    PubMed

    Colangeli, Patrizia; Ferrilli, Monica; Quaranta, Fabrizio; Malizia, Elio; Mbulu, Rosa-Stella; Mukete, Esther; Iipumbu, Lukas; Kamhulu, Anna; Tjipura-Zaire, Georgina; Di Francesco, Cesare; Lelli, Rossella; Scacchia, Massimo

    2012-01-01

    The authors describe the project undertaken by the Istituto G. Caporale to provide a laboratory information management system (LIMS) to the Central Veterinary Laboratory (CVL) in Windhoek, Namibia. This robust laboratory management tool satisfies Namibia's information obligations under international quality standard ISO 17025:2005. The Laboratory Information Management System (LIMS) for Africa was designed to collect and manage all necessary information on samples, tests and test results. The system involves the entry of sample data on arrival, as required by Namibian sampling plans, the tracking of samples through the various sections of the CVL, the collection of test results, generation of test reports and monitoring of outbreaks through data interrogation functions, eliminating multiple registrations of the same data on paper records. It is a fundamental component of the Namibian veterinary information system.

  17. Use of information and communication technologies for teaching physics at the Technical University

    NASA Astrophysics Data System (ADS)

    Polezhaev, V. D.; Polezhaeva, L. N.; Kamenev, V. V.

    2017-01-01

    The paper discusses ways to improve the methods and algorithms of automated knowledge control, and approaches to building and effectively operating electronic teaching complexes that include a new generation of tests whose use is not limited to knowledge control alone. The capabilities of the computer-based testing system SCIENTIA are presented. This system is a tool for automating knowledge control that can be used for the assessment and monitoring of students' knowledge in different types of exams, for students' self-assessment, for preparing test materials, for creating a unified database of tests on a wide range of subjects, and so on. Successful operation of the system has been confirmed in practice in the physics course taken by students at the Technical University.

  18. The abridged patient-generated subjective global assessment is a useful tool for early detection and characterization of cancer cachexia.

    PubMed

    Vigano, Antonio L; di Tomasso, Jonathan; Kilgour, Robert D; Trutschnigg, Barbara; Lucar, Enriqueta; Morais, José A; Borod, Manuel

    2014-07-01

    Cancer cachexia (CC) is a syndrome characterized by wasting of lean body mass and fat, often driven by decreased food intake, hypermetabolism, and inflammation resulting in decreased lifespan and quality of life. Classification of cancer cachexia has improved, but few clinically relevant diagnostic tools exist for its early identification and characterization. The abridged Patient-Generated Subjective Global Assessment (aPG-SGA) is a modification of the original Patient-Generated Subjective Global Assessment, and consists of a four-part questionnaire that scores patients' weight history, food intake, appetite, and performance status. The purpose of this study was to determine whether the aPG-SGA is associated with both features and clinical sequelae of cancer cachexia. In this prospective cohort study, 207 advanced lung and gastrointestinal cancer patients completed the following tests: aPG-SGA, Edmonton Symptom Assessment System, handgrip strength, a complete blood count, albumin, apolipoprotein A and B, and C-reactive protein. Ninety-four participants with good performance status as assessed by the Eastern Cooperative Oncology Group Performance Status completed additional questionnaires and underwent body composition testing. Of these, 68 patients tested for quadriceps strength and completed a 3-day food recall. Multivariable regression models revealed that higher aPG-SGA scores (≥9 vs 0 to 1) are significantly associated (P<0.05) with the following: unfavorable biological markers of cancer cachexia, such as higher white blood cell counts (10.0 vs 6.7×10⁹/L), lower hemoglobin (115.6 vs 127.7 g/L), and elevated C-reactive protein (42.7 vs 18.2 mg/L [406.7 vs 173.3 nmol/L]); decreased anthropometric and physical measures, such as body mass index (22.5 vs 27.1), fat mass (14.4 vs 26.0 kg), handgrip strength (24.7 vs 34.9 kg) and leg strength; an average 12% greater length of hospital stay; a dose reduction in chemotherapy; and increased mortality. Given its association with the main features of cancer cachexia and its ease of use, the aPG-SGA appears to be a useful tool for detecting and predicting outcomes of cancer cachexia. Additional research is required to determine what impact the aPG-SGA has on quality of care when used in the clinical setting. Copyright © 2014 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  19. The ATLAS Simulation Infrastructure

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-09-25

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  20. FELIX-1.0: A finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    NASA Astrophysics Data System (ADS)

    Regnier, D.; Verrière, M.; Dubray, N.; Schunck, N.

    2016-03-01

    We describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank-Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.
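
    For orientation, the Crank-Nicolson update can be written out for a toy problem. The Python fragment below illustrates the scheme only, on a dense 1-D harmonic-oscillator Hamiltonian; it is not FELIX's finite-element C++ solver, and the grid sizes and potential are arbitrary. Each step solves (I + i dt/2 H) g_{n+1} = (I - i dt/2 H) g_n, which conserves the norm for Hermitian H.

      import numpy as np

      n, dx, dt = 200, 0.05, 0.002
      x = np.arange(n) * dx
      lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / dx**2
      H = -0.5 * lap + np.diag(0.5 * (x - x.mean()) ** 2)  # kinetic + potential

      A = np.eye(n) + 0.5j * dt * H
      B = np.eye(n) - 0.5j * dt * H
      g = np.exp(-((x - x.mean()) ** 2)).astype(complex)   # initial wave packet

      for _ in range(100):
          g = np.linalg.solve(A, B @ g)                    # one CN time step
      print(abs(np.vdot(g, g)))                            # norm is conserved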

  1. Comparison of de novo assembly statistics of Cucumis sativus L.

    NASA Astrophysics Data System (ADS)

    Wojcieszek, Michał; Kuśmirek, Wiktor; Pawełkowicz, Magdalena; Pląder, Wojciech; Nowak, Robert M.

    2017-08-01

    Genome sequencing is at the core of genomic research. With the development of NGS and the falling cost of sequencing, the bottleneck has shifted to genome assembly. Developing a proper tool for this task is essential, as the quality of the assembled genome has an important impact on further research. Here we present a comparison of several de Bruijn graph assemblers tested on C. sativus genomic reads. The assessment shows that the newly developed software, dnaasm, provides better results in terms of both quantity and quality: the number of generated sequences is lower by 5-33%, with up to a twofold higher N50, and quality checks confirmed that dnaasm generated reliable results. This provides a strong basis for future genomic analyses.
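
    N50, the headline statistic above, is easy to state precisely in code: it is the length of the contig at which the running total of sorted contig lengths first reaches half the assembly size. A small reference helper (ours, not from the paper):

      def n50(contig_lengths):
          """Return the N50 of an assembly given its contig lengths."""
          lengths = sorted(contig_lengths, reverse=True)
          half = sum(lengths) / 2
          running = 0
          for length in lengths:
              running += length
              if running >= half:
                  return length

      print(n50([100, 80, 60, 40, 20]))  # 80: 100+80 >= 150 (half of 300)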

  2. A hybrid Dantzig-Wolfe, Benders decomposition and column generation procedure for multiple diet production planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Udomsungworagul, A.; Charnsethikul, P.

    2018-03-01

    This article introduces a methodology for solving large-scale two-phase linear programs, applied to a multi-time-period animal diet problem under uncertainty in both raw-material nutrient content and finished-product demand. The formulation allows multiple product formulas to be manufactured in the same time period, and allows raw-material and finished-product inventories to be held. Dantzig-Wolfe decomposition, Benders decomposition, and column generation techniques have been combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used to test the procedure in terms of efficiency and effectiveness trade-offs.
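
    As background, the underlying single-period diet problem is a small linear program. The sketch below (illustrative Python/SciPy with invented numbers; the paper's hybrid Dantzig-Wolfe/Benders/column-generation procedure operates on a far larger multi-period version) chooses a least-cost blend meeting nutrient minimums.

      from scipy.optimize import linprog

      cost = [0.30, 0.45]            # cost per kg of raw materials A and B
      # Nutrient content per kg: rows = protein, energy; columns = A, B.
      nutrients = [[0.20, 0.40],
                   [2.5,  1.8]]
      required = [0.25, 2.0]         # minimums per kg of finished feed

      # linprog minimizes subject to A_ub @ x <= b_ub, so negate >= constraints.
      res = linprog(cost,
                    A_ub=[[-n for n in row] for row in nutrients],
                    b_ub=[-r for r in required],
                    A_eq=[[1.0, 1.0]], b_eq=[1.0],   # blend sums to 1 kg
                    bounds=[(0, None)] * 2)
      print(res.x, res.fun)          # optimal blend and its cost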

  3. Manufacturing Cell Therapies Using Engineered Biomaterials.

    PubMed

    Abdeen, Amr A; Saha, Krishanu

    2017-10-01

    Emerging manufacturing processes to generate regenerative advanced therapies can involve extensive genomic and/or epigenomic manipulation of autologous or allogeneic cells. These cell engineering processes need to be carefully controlled and standardized to maximize safety and efficacy in clinical trials. Engineered biomaterials with smart and tunable properties offer an intriguing tool to provide or deliver cues to retain stemness, direct differentiation, promote reprogramming, manipulate the genome, or select functional phenotypes. This review discusses the use of engineered biomaterials to control human cell manufacturing. Future work exploiting engineered biomaterials has the potential to generate manufacturing processes that produce standardized cells with well-defined critical quality attributes appropriate for clinical testing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Design and Operation of the Synthesis Gas Generator System for Reformed Propane and Glycerin Combustion

    NASA Astrophysics Data System (ADS)

    Pickett, Derek Kyle

    Due to increased interest in sustainable energy, biodiesel has become much more widely used in the last several years. Glycerin, a major waste component of biodiesel production, can be converted into a hydrogen-rich synthesis gas to be used in an engine generator, recovering energy from the biodiesel production process. This thesis details the production, testing, and analysis of a unique synthesis gas generator rig at the University of Kansas. Chapter 2 gives a complete background on all major components and how they are operated. In addition to the component descriptions, the methods for operating the system on pure propane, reformed propane, and reformed glycerin, along with the data acquisition methodology, are described; this chapter serves as a complete operating manual for future students continuing research on the project. Chapter 3 details the literature review completed to better understand fuel reforming of propane and glycerin, and describes the numerical model produced to estimate the species generated during reformation. The model was applied to propane reformation in a proof-of-concept and calibration test before moving to glycerin reformation and its subsequent combustion. Chapter 4 first describes the efforts to apply the numerical model to glycerin using the calibration tools from propane reformation. It then discusses catalytic material preparation and glycerin reformation tests. Gas chromatography analysis of the reformer effluent was completed for comparison with theoretical values from the numerical model. Finally, combustion of reformed glycerin was completed for power generation, and tests were run to compare emissions from syngas combustion and propane combustion.

  5. Influence of the axial rotation angle on tool mark striations.

    PubMed

    Garcia, Derrel Louis; Pieterman, René; Baiker, Martin

    2017-10-01

    A tool's axial rotation influences the geometric properties of a tool mark: the larger the axial rotation angle, the larger the compression of structural details such as striations. This complicates comparing tool marks made at different axial rotations. Using chisels, tool marks were made at 0° to 75° axial rotation and compared using an automated approach (Baiker et al. [10]). In addition, a 3D topographic surface of a chisel was acquired to generate virtual tool marks and to test whether the axial rotation angle of a mark could be predicted. Examination of the tool mark and chisel data sets showed that marks lose information with increasing rotation, owing to the change in relative distance between geometrical details on the tool and the disappearance of smaller details. Similarity and repeatability were high for comparisons between marks with no difference in axial rotation, but decreased as the rotation angle increased from 0° to 75°. With an increasing difference in rotation angles, the tool marks had to be corrected to account for the different compression factors between them. For compression up to 7.5%, this was handled automatically by the tool mark alignment method; for larger compression, manually re-sizing the marks to the uncompressed widths at 0° rotation before alignment was found suitable for successfully comparing even large differences in axial rotation. Similarity and repeatability decreased, however, with an increasing degree of re-sizing. Quality was assessed by determining the similarity at different levels of detail within a tool mark. With axial rotation up to 75°, tool marks were found to reliably represent structural details down to 100 μm. The similarity of structural details below 100 μm depended on the angle, with the highest similarity at small rotation angles and the lowest at large rotation angles. Filtering to remove details below 100 μm led to consistently higher similarity between tool marks at all angles and allowed comparison of marks up to 75° axial rotation. Finally, generated virtual tool mark profiles with an axial rotation were compared to experimental tool marks. The similarity between virtual and experimental tool marks remained high up to 60° rotation, after which it decreased due to the loss of quality in both marks. Predicting the rotation angle is possible under certain conditions up to 45°, with an accuracy of 2.667±0.577°. Copyright © 2017 Elsevier B.V. All rights reserved.
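
    The compression-and-correction idea can be made concrete with a toy profile. In the Python sketch below (our illustration, not the authors' method; the cos(θ) compression factor is a simplifying geometric assumption), a 1-D striation profile is laterally compressed as if made at rotation θ, then re-sized back before computing a correlation.

      import numpy as np

      def compress(profile, theta_deg):
          """Resample a 1-D striation profile as if made at rotation theta."""
          n = len(profile)
          m = max(2, int(round(n * np.cos(np.radians(theta_deg)))))
          return np.interp(np.linspace(0, n - 1, m), np.arange(n), profile)

      def similarity(a, b):
          """Correlation after resizing b onto a's length (the manual re-sizing)."""
          b = np.interp(np.linspace(0, len(b) - 1, len(a)), np.arange(len(b)), b)
          return np.corrcoef(a, b)[0, 1]

      rng = np.random.default_rng(1)
      mark0 = rng.standard_normal(1000).cumsum()   # reference mark at 0 degrees
      mark45 = compress(mark0, 45)                 # same tool, rotated 45 degrees
      print(similarity(mark0, mark45))             # high after size correction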

  6. Understanding Preferences for Treatment After Hypothetical First-Time Anterior Shoulder Dislocation: Surveying an Online Panel Utilizing a Novel Shared Decision-Making Tool.

    PubMed

    Streufert, Ben; Reed, Shelby D; Orlando, Lori A; Taylor, Dean C; Huber, Joel C; Mather, Richard C

    2017-03-01

    Although surgical management of a first-time anterior shoulder dislocation (FTASD) can reduce the risk of recurrent dislocation, other treatment characteristics, costs, and outcomes are important to patients considering treatment options. While patient preferences, such as those elicited by conjoint analysis, have been shown to be important in medical decision-making, the magnitudes or effects of patient preferences in treating an FTASD are unknown. The purpose was to test a novel shared decision-making tool for patients who have sustained an FTASD. Specifically measured were the following: (1) the importance of aspects of operative versus nonoperative treatment, (2) respondents' agreement with results generated by the tool, (3) willingness to share these results with physicians, and (4) the association of results with choice of treatment after FTASD. Cross-sectional study; Level of evidence, 3. A tool was designed and tested using members of Amazon Mechanical Turk, an online panel. The tool included an adaptive conjoint analysis exercise, a method for understanding individuals' perceived importance of the following attributes of treatment: (1) chance of recurrent dislocation, (2) cost, (3) short-term limits on shoulder motion, (4) limits on participation in high-risk activities, and (5) duration of physical therapy. Respondents then chose between operative and nonoperative treatment for the hypothetical shoulder dislocation. Overall, 374 of 501 (75%) respondents met the inclusion criteria; most were young, active males, and one-third reported a prior dislocation. From the conjoint analysis, the chance of recurrent dislocation and the cost of treatment were the most important attributes. A substantial majority agreed with the tool's ability to generate representative preferences and indicated that they would share these preferences with their physician. The importance of recurrence proved significantly predictive of respondents' treatment choices, independent of sex or age; however, activity level was important to previous dislocators. A total of 125 (55%) males and 33 (23%) females chose surgery after FTASD, as did 37% of previous dislocators compared with 45% of nondislocators. When given thorough information about the risks and benefits, respondents had strong preferences for operative treatment after an FTASD. Respondents agreed with the survey results and wanted to share the information with providers. Recurrence was the most important attribute and played a role in decisions about treatment.

  7. Using open source computational tools for predicting human metabolic stability and additional absorption, distribution, metabolism, excretion, and toxicity properties.

    PubMed

    Gupta, Rishi R; Gifford, Eric M; Liston, Ted; Waller, Chris L; Hohman, Moses; Bunin, Barry A; Ekins, Sean

    2010-11-01

    Ligand-based computational models could be more readily shared between researchers and organizations if they were generated with open source molecular descriptors [e.g., chemistry development kit (CDK)] and modeling algorithms, because this would negate the requirement for proprietary commercial software. We initially evaluated open source descriptors and model building algorithms using a training set of approximately 50,000 molecules and a test set of approximately 25,000 molecules with human liver microsomal metabolic stability data. A C5.0 decision tree model demonstrated that CDK descriptors together with a set of Smiles Arbitrary Target Specification (SMARTS) keys had good statistics [κ = 0.43, sensitivity = 0.57, specificity = 0.91, and positive predicted value (PPV) = 0.64], equivalent to those of models built with commercial Molecular Operating Environment 2D (MOE2D) and the same set of SMARTS keys (κ = 0.43, sensitivity = 0.58, specificity = 0.91, and PPV = 0.63). Extending the dataset to ∼193,000 molecules and generating a continuous model using Cubist with a combination of CDK and SMARTS keys or MOE2D and SMARTS keys confirmed this observation. When the continuous predictions and actual values were binned to get a categorical score we observed a similar κ statistic (0.42). The same combination of descriptor set and modeling method was applied to passive permeability and P-glycoprotein efflux data with similar model testing statistics. In summary, open source tools demonstrated predictive results comparable to those of commercial software with attendant cost savings. We discuss the advantages and disadvantages of open source descriptors and the opportunity for their use as a tool for organizations to share data precompetitively, avoiding repetition and assisting drug discovery.
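
    The categorical statistics quoted above all derive from a binary confusion matrix. A small Python reference helper (ours, with invented counts; not from the paper) shows the arithmetic, including Cohen's kappa:

      def classification_stats(tp, fp, tn, fn):
          """Compute kappa, sensitivity, specificity and PPV from counts."""
          total = tp + fp + tn + fn
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          ppv = tp / (tp + fp)
          observed = (tp + tn) / total
          # Expected agreement by chance, then Cohen's kappa.
          expected = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / total**2
          kappa = (observed - expected) / (1 - expected)
          return {"kappa": kappa, "sensitivity": sensitivity,
                  "specificity": specificity, "ppv": ppv}

      print(classification_stats(tp=570, fp=320, tn=3680, fn=430))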

  8. Quokka: a comprehensive tool for rapid and accurate prediction of kinase family-specific phosphorylation sites in the human proteome.

    PubMed

    Li, Fuyi; Li, Chen; Marquez-Lago, Tatiana T; Leier, André; Akutsu, Tatsuya; Purcell, Anthony W; Smith, A Ian; Lithgow, Trevor; Daly, Roger J; Song, Jiangning; Chou, Kuo-Chen

    2018-06-27

    Kinase-regulated phosphorylation is a ubiquitous type of post-translational modification (PTM) in both eukaryotic and prokaryotic cells. Phosphorylation plays fundamental roles in many signalling pathways and biological processes, such as protein degradation and protein-protein interactions. Experimental studies have revealed that signalling defects caused by aberrant phosphorylation are highly associated with a variety of human diseases, especially cancers. In light of this, a number of computational methods aiming to accurately predict protein kinase family-specific or kinase-specific phosphorylation sites have been established, thereby facilitating phosphoproteomic data analysis. In this work, we present Quokka, a novel bioinformatics tool that allows users to rapidly and accurately identify human kinase family-regulated phosphorylation sites. Quokka was developed by using a variety of sequence scoring functions combined with an optimized logistic regression algorithm. We evaluated Quokka based on well-prepared up-to-date benchmark and independent test datasets, curated from the Phospho.ELM and UniProt databases, respectively. The independent test demonstrates that Quokka improves the prediction performance compared with state-of-the-art computational tools for phosphorylation prediction. In summary, our tool provides users with high-quality predicted human phosphorylation sites for hypothesis generation and biological validation. The Quokka webserver and datasets are freely available at http://quokka.erc.monash.edu/. Supplementary data are available at Bioinformatics online.
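
    As a flavor of the approach, the toy Python sketch below one-hot encodes fixed-length sequence windows and fits a logistic regression classifier; the windows, labels, and encoding are invented and far simpler than Quokka's optimized scoring functions.

      from sklearn.linear_model import LogisticRegression
      from sklearn.preprocessing import OneHotEncoder

      windows = ["RRASVAG", "PLSSPQR", "GGSGSGK", "KKRSRSP"]  # made-up 7-mers
      labels = [1, 1, 0, 0]                                   # 1 = phosphorylated

      # Each window position becomes a categorical feature, one-hot encoded.
      enc = OneHotEncoder(handle_unknown="ignore")
      X = enc.fit_transform([list(w) for w in windows])

      clf = LogisticRegression().fit(X, labels)
      print(clf.predict_proba(enc.transform([list("RRQSVAG")]))[:, 1])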

  9. Advanced Free Flight Planner and Dispatcher's Workstation: Preliminary Design Specification

    NASA Technical Reports Server (NTRS)

    Wilson, J.; Wright, C.; Couluris, G. J.

    1997-01-01

    The National Aeronautics and Space Administration (NASA) has implemented the Advanced Air Transportation Technology (AATT) program to investigate future improvements to the national and international air traffic management systems. This research, as part of the AATT program, developed preliminary design requirements for an advanced Airline Operations Control (AOC) dispatcher's workstation, with emphasis on flight planning. This design will support the implementation of an experimental workstation in NASA laboratories that would emulate AOC dispatch operations. The work developed an airline flight plan data base and specified requirements for: a computer tool for generation and evaluation of free-flight, user-preferred trajectories (UPT); the kernel of an advanced flight planning system to be incorporated into the UPT-generation tool; and an AOC workstation to house the UPT-generation tool and to provide a real-time testing environment. A prototype for the advanced flight plan optimization kernel was developed and demonstrated. The flight planner uses dynamic programming to search a four-dimensional wind and temperature grid to identify the optimal route, altitude and speed for successive segments of a flight. An iterative process is employed in which a series of trajectories is successively refined until the UPT is identified. The flight planner is designed to function in the current operational environment as well as in free flight. The free flight environment would enable greater flexibility in UPT selection based on the alleviation of current procedural constraints. The prototype also takes advantage of advanced computer processing capabilities to implement more powerful optimization routines than would be possible with older computer systems.
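
    The dynamic-programming search over successive flight segments can be sketched compactly. The Python fragment below is a stylized illustration with an invented cost model, altitude-only state, and arbitrary dimensions; the actual planner searches a four-dimensional wind and temperature grid over route, altitude, and speed.

      SEGMENTS, ALTITUDES = 5, 4

      def seg_cost(seg, alt):
          # Hypothetical segment cost: fuel burn falls with altitude; a real
          # planner would look up winds and temperature for this segment here.
          return 10.0 - 1.5 * alt

      def climb_cost(a, b):
          return 2.0 * abs(a - b)        # penalty for changing altitude

      # best[a] = minimum cost over segments so far, ending at altitude a.
      best = [seg_cost(0, a) for a in range(ALTITUDES)]
      for seg in range(1, SEGMENTS):
          best = [min(best[p] + climb_cost(p, a) for p in range(ALTITUDES))
                  + seg_cost(seg, a) for a in range(ALTITUDES)]
      print(min(best))                   # cost of the optimal altitude profile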

  10. The development of a virtual 3D model of the renal corpuscle from serial histological sections for E-learning environments.

    PubMed

    Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin

    2015-01-01

    Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education, 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines the steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to displaying the imported images of the original sections, the software generates, and allows visualization of, virtual sections in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education of microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. © 2015 American Association of Anatomists.

  11. Design Improvements and Analysis of Innovative High-Level Waste Pipeline Unplugging Technologies - 12171

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pribanic, Tomas; Awwad, Amer; Crespo, Jairo

    2012-07-01

    Transferring high-level waste (HLW) between storage tanks or to treatment facilities is a common practice performed at the Department of Energy (DoE) sites. Changes in the chemical and/or physical properties of the HLW slurry during the transfer process may lead to the formation of blockages inside the pipelines resulting in schedule delays and increased costs. To improve DoE's capabilities in the event of a pipeline plugging incident, FIU has continued to develop two novel unplugging technologies: an asynchronous pulsing system and a peristaltic crawler. The asynchronous pulsing system uses a hydraulic pulse generator to create pressure disturbances at two opposite inlet locations of the pipeline to dislodge blockages by attacking the plug from both sides remotely. The peristaltic crawler is a pneumatic/hydraulic operated crawler that propels itself by a sequence of pressurization/depressurization of cavities (inner tubes). The crawler includes a frontal attachment that has a hydraulically powered unplugging tool. In this paper, details of the asynchronous pulsing system's ability to unplug a pipeline on a small-scale test-bed and results from the experimental testing of the second generation peristaltic crawler are provided. The paper concludes with future improvements for the third generation crawler and a recommended path forward for the asynchronous pulsing testing. (authors)

  12. Generative Text Sets: Tools for Negotiating Critically Inclusive Early Childhood Teacher Education Pedagogical Practices

    ERIC Educational Resources Information Center

    Souto-Manning, Mariana

    2017-01-01

    Through a case study, this article sheds light onto generative text sets as tools for developing and enacting critically inclusive early childhood teacher education pedagogies. In doing so, it positions teaching and learning processes as sociocultural, historical, and political acts as it inquires into the use of generative text sets in one early…

  13. Software For Graphical Representation Of A Network

    NASA Technical Reports Server (NTRS)

    Mcallister, R. William; Mclellan, James P.

    1993-01-01

    System Visualization Tool (SVT) computer program developed to provide systems engineers with a means of graphically representing networks. Generates diagrams illustrating structures and states of networks defined by users. Provides systems engineers a powerful tool simplifying analysis of requirements and testing and maintenance of complex software-controlled systems. Employs visual models supporting analysis of chronological sequences of requirements, simulation data, and related software functions. Applied to pneumatic, hydraulic, and propellant-distribution networks. Used to define and view arbitrary configurations of such major hardware components of system as propellant tanks, valves, propellant lines, and engines. Also graphically displays status of each component. Advantage of SVT: utilizes visual cues to represent configuration of each component within network. Written in Turbo Pascal(R), version 5.0.

  14. New Techniques for the Generation and Analysis of Tailored Microbial Systems on Surfaces.

    PubMed

    Furst, Ariel L; Smith, Matthew J; Francis, Matthew B

    2018-05-17

    The interactions between microbes and surfaces provide critically important cues that control the behavior and growth of the cells. As our understanding of complex microbial communities improves, there is a growing need for experimental tools that can establish and control the spatial arrangements of these cells in a range of contexts. Recent improvements in methods to attach bacteria and yeast to nonbiological substrates, combined with an expanding set of techniques available to study these cells, position this field for many new discoveries. Improving methods for controlling the immobilization of bacteria provides powerful experimental tools for testing hypotheses regarding microbiome interactions, studying the transfer of nutrients between bacterial species, and developing microbial communities for green energy production and pollution remediation.

  15. Sustainability Tools Inventory - Initial Gaps Analysis

    EPA Pesticide Factsheets

    This report identifies a suite of tools that address a comprehensive set of community sustainability concerns. The objective is to discover whether "gaps" exist in the tool suite’s analytic capabilities. These tools address activities that significantly influence resource consumption, waste generation, and hazard generation, including air pollution and greenhouse gases. In addition, the tools have been evaluated using four screening criteria: relevance to community decision making, tools in an appropriate developmental stage, tools that may be transferrable to situations useful for communities, and tools requiring skill levels appropriate to communities. This document provides an initial gap analysis in the area of community sustainability decision support tools. It provides a reference to communities for existing decision support tools, and a set of gaps for those wishing to develop additional needed tools to help communities achieve sustainability. It contributes to SHC 1.61.4

  16. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA

    PubMed Central

    Wright, Imogen A.; Travers, Simon A.

    2014-01-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. PMID:24861618
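
    A toy Python fragment illustrates the codon-aware distinction at the heart of this approach (our simplification, not RAMICS's profile hidden Markov models): an indel whose length is a multiple of three preserves the reading frame, while any other length causes a frameshift.

      def classify_indel(ref_allele, alt_allele):
          """Classify an indel as in-frame or frameshift by length difference."""
          diff = abs(len(ref_allele) - len(alt_allele))
          if diff == 0:
              return "substitution"
          return "in-frame indel" if diff % 3 == 0 else "frameshift"

      print(classify_indel("ATG", "ATGAAA"))   # in-frame indel (+3 nt)
      print(classify_indel("ATG", "ATGA"))     # frameshift (+1 nt)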

  17. Use of statecharts in the modelling of dynamic behaviour in the ATLAS DAQ prototype-1

    NASA Astrophysics Data System (ADS)

    Croll, P.; Duval, P.-Y.; Jones, R.; Kolos, S.; Sari, R. F.; Wheeler, S.

    1998-08-01

    Many applications within the ATLAS DAQ prototype-1 system have complicated dynamic behaviour which can be successfully modelled in terms of states and transitions between states. Previously, state diagrams implemented as finite-state machines have been used. Although effective, they become ungainly as system size increases. Harel statecharts address this problem by implementing additional features such as hierarchy and concurrency. The CHSM object-oriented language system is freeware which implements Harel statecharts as concurrent, hierarchical, finite-state machines (CHSMs). An evaluation of this language system by the ATLAS DAQ group has shown it to be suitable for describing the dynamic behaviour of typical DAQ applications. The language is currently being used to model the dynamic behaviour of the prototype-1 run-control system. The design is specified by means of a CHSM description file, and C++ code is obtained by running the CHSM compiler on the file. In parallel with the modelling work, a code generator has been developed which translates statecharts, drawn using the StP CASE tool, into the CHSM language. C++ code, describing the dynamic behaviour of the run-control system, has been successfully generated directly from StP statecharts using the CHSM generator and compiler. The validity of the design was tested using the simulation features of the Statemate CASE tool.
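
    To fix ideas, run-control behaviour reduces, at its simplest, to states and event-driven transitions. The Python fragment below is a flat finite-state machine for flavor only; it deliberately omits the hierarchy and concurrency that Harel statecharts and CHSM add, and the states and events are invented.

      class FSM:
          def __init__(self, initial, transitions):
              self.state = initial
              self.transitions = transitions      # {(state, event): next_state}

          def fire(self, event):
              key = (self.state, event)
              if key in self.transitions:
                  self.state = self.transitions[key]
              return self.state

      run_control = FSM("idle", {
          ("idle", "configure"): "configured",
          ("configured", "start"): "running",
          ("running", "stop"): "configured",
      })
      print(run_control.fire("configure"), run_control.fire("start"))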

  18. PC graphics generation and management tool for real-time applications

    NASA Technical Reports Server (NTRS)

    Truong, Long V.

    1992-01-01

    A graphics tool was designed and developed for easy generation and management of personal computer graphics. It also provides methods and 'run-time' software for many common artificial intelligence (AI) or expert system (ES) applications.

  19. The Development and Preliminary Testing of an Instrument for Assessing Fatigue Self-management Outcomes in Patients With Advanced Cancer.

    PubMed

    Chan, Raymond Javan; Yates, Patsy; McCarthy, Alexandra L

    Fatigue is one of the most distressing and commonly experienced symptoms in patients with advanced cancer. Although the self-management (SM) of cancer-related symptoms has received increasing attention, no research instrument assessing fatigue SM outcomes for patients with advanced cancer is available. The aim of this study was to describe the development and preliminary testing of an interviewer-administered instrument for assessing the frequency and perceived levels of effectiveness and self-efficacy associated with fatigue SM behaviors in patients with advanced cancer. The development and testing of the Self-efficacy in Managing Symptoms Scale-Fatigue Subscale for Patients With Advanced Cancer (SMSFS-A) involved a number of procedures: item generation using a comprehensive literature review and semistructured interviews, content validity evaluation using expert panel reviews, and face validity and test-retest reliability evaluation using pilot testing. Initially, 23 items (22 specific behaviors with 1 global item) were generated from the literature review and semistructured interviews. After 2 rounds of expert panel review, the final scale was reduced to 17 items (16 behaviors with 1 global item). Participants in the pilot test (n = 10) confirmed that the questions in this scale were clear and easy to understand. Bland-Altman analysis showed agreement of results over a 1-week interval. The SMSFS-A items were generated using multiple sources. This tool demonstrated preliminary validity and reliability. The SMSFS-A has the potential to be used for clinical and research purposes. Nurses can use this instrument for collecting data to inform the initiation of appropriate fatigue SM support for this population.

  20. A Comparison of Methods for Assessing Space Suit Joint Ranges of Motion

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay T.

    2012-01-01

    Through the Advanced Exploration Systems (AES) Program, NASA is attempting to use the vast collection of space suit mobility data from 50 years' worth of space suit testing to build predictive analysis tools to aid early architecture decisions for future missions and exploration programs. However, the design engineers must first understand if and how data generated by different methodologies can be compared directly and used in an essentially interchangeable manner. To address this question, the isolated joint range of motion data from two different test series were compared. Both data sets were generated from participants wearing the Mark III Space Suit Technology Demonstrator (MK-III), Waist Entry I-suit (WEI), and minimal clothing. Additionally, the two tests shared a common test subject, which allowed within-subject comparisons of the methods and greatly reduced the number of variables in play. The tests varied in their methodologies: the Space Suit Comparative Technologies Evaluation used 2-D photogrammetry to analyze isolated ranges of motion, while the Constellation space suit benchmarking and requirements development used 3-D motion capture to evaluate both isolated and functional joint ranges of motion. The isolated data from both test series were compared graphically, as percent differences, and by simple statistical analysis. The results indicated that while the methods generate results that are statistically the same (significance level p = 0.01), the differences are significant enough in the practical sense to make direct comparisons ill-advised. The concluding recommendations propose direction for how to bridge the data gaps and address future mobility data collection to allow for backward compatibility.
