Sample records for text analysis software

  1. Learn by Yourself: The Self-Learning Tools for Qualitative Analysis Software Packages

    ERIC Educational Resources Information Center

    Freitas, Fábio; Ribeiro, Jaime; Brandão, Catarina; Reis, Luís Paulo; de Souza, Francislê Neri; Costa, António Pedro

    2017-01-01

    Computer Assisted Qualitative Data Analysis Software (CAQDAS) packages are tools that help researchers develop qualitative research projects. These packages help users with tasks such as transcription analysis, coding and text interpretation, writing and annotation, content search and analysis, recursive abstraction, grounded theory…

  2. Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2009-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
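
    As an illustration of the graph analysis in task 3, the sketch below enumerates paths from a hazard source to a vulnerable entity over a toy directed graph. The component names and connections are invented for illustration, not drawn from the Orion models.

      from collections import deque

      def hazard_paths(graph, source, target):
          """Enumerate simple (cycle-free) paths from source to target."""
          paths, queue = [], deque([[source]])
          while queue:
              path = queue.popleft()
              node = path[-1]
              if node == target:
                  paths.append(path)
                  continue
              for nxt in graph.get(node, []):
                  if nxt not in path:  # skip nodes already on the path
                      queue.append(path + [nxt])
          return paths

      # Hypothetical architecture model of components and connections.
      model = {
          "propellant_leak": ["valve_controller"],
          "valve_controller": ["flight_software", "abort_sensor"],
          "flight_software": ["abort_sequencer"],
          "abort_sensor": ["abort_sequencer"],
      }
      print(hazard_paths(model, "propellant_leak", "abort_sequencer"))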

  3. Software for the Humanities and Social Sciences.

    ERIC Educational Resources Information Center

    National Collegiate Software Clearinghouse, Raleigh, NC.

    This computer software program bibliography available from the National Collegiate Software Clearinghouse (NCSC) includes programs for college students and researchers in anthropology, economics and business, education, English and text analysis, foreign language, general interest, history, management, philosophy and religion, political science,…

  4. Automation for System Safety Analysis

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  5. The Case for Open Source Software: The Interactional Discourse Lab

    ERIC Educational Resources Information Center

    Choi, Seongsook

    2016-01-01

    Computational techniques and software applications for the quantitative content analysis of texts are now well established, and many qualitative data software applications enable the manipulation of input variables and the visualization of complex relations between them via interactive and informative graphical interfaces. Although advances in…

  6. Teaching Visual Texts with the Multimodal Analysis Software

    ERIC Educational Resources Information Center

    Lim Fei, Victor; O'Halloran, Kay L.; Tan, Sabine; E., Marissa K. L.

    2015-01-01

    This exploratory study introduces the systemic approach and the explicit teaching of a meta-language to provide conceptual tools for students for the analysis and interpretation of multimodal texts. Equipping students with a set of specialised vocabulary with conventionalised meanings associated with specific choices in multimodal texts empowers…

  7. Kubios HRV--heart rate variability analysis software.

    PubMed

    Tarvainen, Mika P; Niskanen, Juha-Pekka; Lipponen, Jukka A; Ranta-Aho, Perttu O; Karjalainen, Pasi A

    2014-01-01

    Kubios HRV is an advanced and easy-to-use software package for heart rate variability (HRV) analysis. The software supports several input data formats for electrocardiogram (ECG) data and beat-to-beat RR interval data. It includes an adaptive QRS detection algorithm and tools for artifact correction, trend removal and analysis sample selection. The software computes all the commonly used time-domain and frequency-domain HRV parameters and several nonlinear parameters. There are several adjustable analysis settings through which the analysis methods can be optimized for different data. The ECG-derived respiratory frequency is also computed, which is important for reliable interpretation of the analysis results. The analysis results can be saved as an ASCII text file (easy to import into MS Excel or SPSS), Matlab MAT-file, or as a PDF report. The software is easy to use through its compact graphical user interface. The software is available free of charge for Windows and Linux operating systems at http://kubios.uef.fi. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
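
    For readers unfamiliar with the time-domain parameters mentioned above, the sketch below computes two of the most common ones, SDNN and RMSSD, from a made-up RR interval series. It illustrates the standard formulas; it is not Kubios code.

      import numpy as np

      rr_ms = np.array([812, 845, 830, 798, 860, 842, 815, 833])  # RR intervals (ms)

      sdnn = np.std(rr_ms, ddof=1)                   # SDNN: std of the RR intervals
      rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # RMSSD: RMS of successive differences

      print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")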

  8. Development of MATLAB software to control data acquisition from a multichannel systems multi-electrode array.

    PubMed

    Messier, Erik

    2016-08-01

    A Multichannel Systems (MCS) microelectrode array data acquisition (DAQ) unit is used to collect multichannel electrograms (EGM) from a Langendorff-perfused rabbit heart system to study sudden cardiac death (SCD). MCS provides software through which data being processed by the DAQ unit can be displayed and saved, but this software does not combine well with MATLAB. MCS's software stores recorded EGM data in a MathCad (MCD) format, which is then converted to a text file format. These text files are very large, so importing the EGM data into MATLAB for real-time analysis is very time consuming. Therefore, customized MATLAB software was developed to control the acquisition of data from the MCS DAQ unit and provide specific laboratory accommodations for this study of SCD. The developed DAQ unit control software will be able to accurately provide real-time display of EGM signals, record and save EGM signals in MATLAB in a desired format, and produce real-time analysis of the EGM signals, all through an intuitive GUI.

  9. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The SPIRE system consists of software for visual analysis of primarily text-based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial-proximity display (Galaxies or Themescape) in which items (documents and/or themes) that appear visually close to each other are known to have closely related content. Innovative interaction techniques then allow for dynamic visual analysis of large text-based information spaces.

  10. SPIRE1.03. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, K.J.; Bohn, S.; Crow, V.

    The SPIRE system consists of software for visual analysis of primarily text-based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial-proximity display (Galaxies or Themescape) in which items (documents and/or themes) that appear visually close to each other are known to have closely related content. Innovative interaction techniques then allow for dynamic visual analysis of large text-based information spaces.
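
    The Galaxies-style layout idea is easy to approximate: score document similarity (for example with TF-IDF) and project to two dimensions so related documents land near each other. The sketch below uses scikit-learn on invented documents; SPIRE's actual algorithms are not published in this record.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD

      docs = [
          "reactor fuel burnup analysis",
          "fuel burnup and reactor neutronics",
          "qualitative coding of interview text",
      ]
      # TF-IDF vectors reduced to 2-D coordinates for a spatial display.
      coords = TruncatedSVD(n_components=2).fit_transform(
          TfidfVectorizer().fit_transform(docs))
      for doc, (x, y) in zip(docs, coords):
          print(f"({x:+.2f}, {y:+.2f})  {doc}")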

  11. A Requirements-Based Exploration of Open-Source Software Development Projects--Towards a Natural Language Processing Software Analysis Framework

    ERIC Educational Resources Information Center

    Vlas, Radu Eduard

    2012-01-01

    Open source projects do have requirements; they are, however, mostly informal text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects,…

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  13. Capi text V.1--data analysis software for nailfold skin capillaroscopy.

    PubMed

    Dobrev, Hristo P

    2007-01-01

    Nailfold skin capillaroscopy is a simple non-invasive method used to assess conditions of disturbed microcirculation such as Raynaud's phenomenon, acrocyanosis, perniones, connective tissue diseases, psoriasis, diabetes mellitus, neuropathy and vibration disease. To develop data analysis software aimed at assisting the documentation and analysis of a capillaroscopic investigation. SOFTWARE DESCRIPTION: The programme is based on a modular principle. The module "Nomenclatures" includes menus for the patients' data. The module "Examinations" includes menus for all general and specific aspects of the medical examination and capillaroscopic investigations. The modules "Settings" and "Information" include customization menus for the programme. The results of nailfold capillaroscopy can be printed in a short or expanded form. This software allows physicians to perform quick searches using various specified criteria and to prepare analyses and reports. This software programme will facilitate any practitioner who performs nailfold skin capillaroscopy.

  14. Computer-Assisted Text-Analysis for ESL Students.

    ERIC Educational Resources Information Center

    Reid, Joy; And Others

    1983-01-01

    Reports an investigation into possibilities of using word processors and text analysis software with English as second language (ESL) students to determine (1) if foreign students can learn to use computer equipment, (2) if students feel time invested is worthwhile, and (3) if ESL students' problems with writing American academic prose can be…

  15. Voice Recognition Software Accuracy with Second Language Speakers of English.

    ERIC Educational Resources Information Center

    Coniam, D.

    1999-01-01

    Explores the potential of the use of voice-recognition technology with second-language speakers of English. Involves the analysis of the output produced by a small group of very competent second-language subjects reading a text into the voice recognition software Dragon Systems "Dragon NaturallySpeaking." (Author/VWL)
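
    Studies of this kind typically score recognizer output against the read text with word error rate (WER); a minimal edit-distance version is sketched below with invented strings (the study's own scoring method is not detailed in this record).

      def wer(reference: str, hypothesis: str) -> float:
          """Word error rate via word-level edit distance."""
          ref, hyp = reference.split(), hypothesis.split()
          d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
          for i in range(len(ref) + 1):
              d[i][0] = i
          for j in range(len(hyp) + 1):
              d[0][j] = j
          for i in range(1, len(ref) + 1):
              for j in range(1, len(hyp) + 1):
                  cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                  d[i][j] = min(d[i - 1][j] + 1,         # deletion
                                d[i][j - 1] + 1,         # insertion
                                d[i - 1][j - 1] + cost)  # substitution
          return d[-1][-1] / len(ref)

      print(wer("the cat sat on the mat", "the cat sad on mat"))  # ~0.333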

  16. Public Domain Generic Tools: An Overview.

    ERIC Educational Resources Information Center

    Erjavec, Tomaz

    This paper presents an introduction to language engineering software, especially for computerized language and text corpora. The focus of the paper is on small and relatively independent pieces of software designed for specific, often low-level language analysis tasks, and on tools in the public domain. Discussion begins with the application of…

  17. Does Use of Text-to-Speech and Related Read-Aloud Tools Improve Reading Comprehension for Students with Reading Disabilities? A Meta-Analysis

    ERIC Educational Resources Information Center

    Wood, Sarah G.; Moxley, Jerad H.; Tighe, Elizabeth L.; Wagner, Richard K.

    2018-01-01

    Text-to-speech and related read-aloud tools are being widely implemented in an attempt to assist students' reading comprehension skills. Read-aloud software, including text-to-speech, is used to translate written text into spoken text, enabling one to listen to written text while reading along. It is not clear how effective text-to-speech is at…

  18. Impact of Machine-Translated Text on Entity and Relationship Extraction

    DTIC Science & Technology

    2014-12-01

    Using social network analysis tools is an important asset in...semantic modeling software to automatically build detailed network models from unstructured text. Contour imports unstructured text and then maps the text...onto an existing ontology of frames at the sentence level, using FrameNet, a structured language model, and through Semantic Role Labeling (SRL)...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The plpdfa software is a product of an LDRD project at LLNL entitled "Adaptive Sampling for Very High Throughput Data Streams" (tracking number 11-ERD-035). This software was developed by a graduate student summer intern, Chris Challis, who worked under project PI Dan Merl during the summer of 2011. The source code implements a statistical analysis technique for clustering and classification of text-valued data. The method had been previously published by the PI in the open literature.

  20. Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2008-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.

  1. The potential of latent semantic analysis for machine grading of clinical case summaries.

    PubMed

    Kintsch, Walter

    2002-02-01

    This paper introduces latent semantic analysis (LSA), a machine learning method for representing the meaning of words, sentences, and texts. LSA induces a high-dimensional semantic space from reading a very large amount of texts. The meaning of words and texts can be represented as vectors in this space and hence can be compared automatically and objectively. A generative theory of the mental lexicon based on LSA is described. The word vectors LSA constructs are context free, and each word, irrespective of how many meanings or senses it has, is represented by a single vector. However, when a word is used in different contexts, context appropriate word senses emerge. Several applications of LSA to educational software are described, involving the ability of LSA to quickly compare the content of texts, such as an essay written by a student and a target essay. An LSA-based software tool is sketched for machine grading of clinical case summaries written by medical students.
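
    The core operation the abstract describes, deriving a reduced semantic space from a term-document matrix and comparing texts by cosine similarity, can be sketched in a few lines. The matrix below is a toy example; real LSA spaces are induced from very large corpora.

      import numpy as np

      # Rows = terms, columns = documents (toy co-occurrence counts).
      A = np.array([[2, 0, 1],
                    [1, 1, 0],
                    [0, 2, 1],
                    [0, 1, 2]], dtype=float)
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2
      doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # documents in the k-dim semantic space

      def cosine(u, v):
          return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

      print(cosine(doc_vecs[0], doc_vecs[1]))  # similarity of documents 0 and 1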

  2. Design of a Syntax Validation Tool for Requirements Analysis Using Structured Analysis and Design Technique (SADT)

    DTIC Science & Technology

    1988-09-01

    analysis phase of the software life cycle (16:1-1). While editing a SADT diagram, the tool should be able to check whether or not structured analysis...diagrams are valid for the SADT's syntax, produce error messages, do error recovery, and perform editing suggestions. Thus, this tool must have the...directed editors are editors which use the syntax of the programming language while editing a program. While text editors treat programs as text, syntax

  3. COMPASS: a suite of pre- and post-search proteomics software tools for OMSSA

    PubMed Central

    Wenger, Craig D.; Phanstiel, Douglas H.; Lee, M. Violet; Bailey, Derek J.; Coon, Joshua J.

    2011-01-01

    Here we present the Coon OMSSA Proteomic Analysis Software Suite (COMPASS): a free and open-source software pipeline for high-throughput analysis of proteomics data, designed around the Open Mass Spectrometry Search Algorithm. We detail a synergistic set of tools for protein database generation, spectral reduction, peptide false discovery rate analysis, peptide quantitation via isobaric labeling, protein parsimony and protein false discovery rate analysis, and protein quantitation. We strive for maximum ease of use, utilizing graphical user interfaces and working with data files in the original instrument vendor format. Results are stored in plain text comma-separated values files, which are easy to view and manipulate with a text editor or spreadsheet program. We illustrate the operation and efficacy of COMPASS through the use of two LC–MS/MS datasets. The first is a dataset of a highly annotated mixture of standard proteins and manually validated contaminants that exhibits the identification workflow. The second is a dataset of yeast peptides, labeled with isobaric stable isotope tags and mixed in known ratios, to demonstrate the quantitative workflow. For these two datasets, COMPASS performs equivalently or better than the current de facto standard, the Trans-Proteomic Pipeline. PMID:21298793
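
    The record does not spell out how COMPASS estimates peptide false discovery rates; a common proteomics approach is target-decoy estimation, sketched below with invented scores.

      def decoy_fdr(psms, threshold):
          """psms: (score, is_decoy) pairs; FDR ~ decoys/targets above the cutoff."""
          targets = sum(1 for s, is_decoy in psms if s >= threshold and not is_decoy)
          decoys = sum(1 for s, is_decoy in psms if s >= threshold and is_decoy)
          return decoys / max(targets, 1)

      # Invented peptide-spectrum matches: (search score, decoy flag).
      psms = [(31.2, False), (28.9, False), (27.5, True), (25.1, False), (24.8, True)]
      print(f"FDR at score 25: {decoy_fdr(psms, 25.0):.2f}")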

  4. Reading Guided by Automated Graphical Representations: How Model-Based Text Visualizations Facilitate Learning in Reading Comprehension Tasks

    ERIC Educational Resources Information Center

    Pirnay-Dummer, Pablo; Ifenthaler, Dirk

    2011-01-01

    Our study integrates automated natural language-oriented assessment and analysis methodologies into feasible reading comprehension tasks. With the newly developed T-MITOCAR toolset, prose text can be automatically converted into an association net which has similarities to a concept map. The "text to graph" feature of the software is based on…

  5. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
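
    Underneath the menus, estimates come from MCMC sampling. The sketch below is a bare-bones Metropolis sampler for a normal mean with toy data, shown only to make the MCMC step concrete; it is not the package's code.

      import math
      import random
      import statistics

      data = [2.1, 1.8, 2.4, 2.0, 2.6]

      def log_lik(mu):
          return -0.5 * sum((x - mu) ** 2 for x in data)  # sigma fixed at 1, flat prior

      mu, chain = 0.0, []
      for _ in range(5000):
          prop = mu + random.gauss(0, 0.5)                # symmetric random-walk proposal
          if math.log(random.random()) < log_lik(prop) - log_lik(mu):
              mu = prop                                   # accept
          chain.append(mu)

      print(f"posterior mean ~ {statistics.mean(chain[1000:]):.2f}")  # ~ sample mean 2.18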

  6. Development of a User Interface for a Regression Analysis Software Tool

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    An easy-to -use user interface was implemented in a highly automated regression analysis tool. The user interface was developed from the start to run on computers that use the Windows, Macintosh, Linux, or UNIX operating system. Many user interface features were specifically designed such that a novice or inexperienced user can apply the regression analysis tool with confidence. Therefore, the user interface s design minimizes interactive input from the user. In addition, reasonable default combinations are assigned to those analysis settings that influence the outcome of the regression analysis. These default combinations will lead to a successful regression analysis result for most experimental data sets. The user interface comes in two versions. The text user interface version is used for the ongoing development of the regression analysis tool. The official release of the regression analysis tool, on the other hand, has a graphical user interface that is more efficient to use. This graphical user interface displays all input file names, output file names, and analysis settings for a specific software application mode on a single screen which makes it easier to generate reliable analysis results and to perform input parameter studies. An object-oriented approach was used for the development of the graphical user interface. This choice keeps future software maintenance costs to a reasonable limit. Examples of both the text user interface and graphical user interface are discussed in order to illustrate the user interface s overall design approach.

  7. Learning about Language and Learners from Computer Programs

    ERIC Educational Resources Information Center

    Cobb, Tom

    2010-01-01

    Making Nation's text analysis software accessible via the World Wide Web has opened up an exploration of how his learning principles can best be realized in practice. This paper discusses 3 representative episodes in the ongoing exploration. The first concerns an examination of the assumptions behind modeling what texts look like to learners with…

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahanani, Nursinta Adi; Natsir, Khairina; Hartini, Entin (e-mail: sintaadi@batan.go.id)

    Data processing software packages such as VSOP and MCNPX are scientifically proven and complete, but their results are huge, complex text files, and during analysis users need additional processing, such as Microsoft Excel, to produce informative results. This research develops user interface software for the output of VSOP and MCNPX: the VSOP output supports neutronic analysis, and the MCNPX output supports burn-up analysis. The software was developed using iterative methods, which allow revision and addition of features according to user needs. Processing time with this software is 500 times faster than with conventional methods using Microsoft Excel. Python is used as the programming language because it is available for all major operating systems: Windows, Linux/Unix, OS/2, Mac, and Amiga, among others. The values that support neutronic analysis are k-eff, burn-up, and the masses of Pu-239 and Pu-241. Burn-up analysis uses the mass inventory values of the actinides (thorium, plutonium, neptunium, and uranium). Values are visualized graphically to support analysis.
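
    In the same spirit, a few lines of Python can pull values such as k-eff out of a large code-output text file instead of hand-copying them into a spreadsheet. The line format below is invented; actual VSOP and MCNPX output layouts differ.

      import re

      sample_output = """
      cycle   1   k-eff = 1.02345
      cycle   2   k-eff = 1.01987
      cycle   3   k-eff = 1.01544
      """
      # Extract every k-eff value for plotting against burn-up step.
      keff = [float(m) for m in re.findall(r"k-eff\s*=\s*([\d.]+)", sample_output)]
      print(keff)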

  9. CWA 15793 2011 Planning and Implementation Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross, Alan; Nail, George

    This software, built on an open source platform called Electron (runs on Chromium and Node.js), is designed to assist organizations in the implementation of a biorisk management system consistent with the requirements of the international, publicly available guidance document CEN Workshop Agreement 15793:2011 (CWA 15793). The software includes tools for conducting organizational gap analysis against CWA 15793 requirements, planning tools to support the implementation of CWA 15793 requirements, and performance monitoring support. The gap analysis questions are based on the text of CWA 15793, and its associated guidance document, CEN Workshop Agreement 16393:2012. The authors have secured permission from the publisher of CWA 15793, the European Committee for Standardization (CEN), to use language from the document in the software, with the understanding that the software will be made available freely, without charge.

  10. Communicating headings and preview sentences in text and speech.

    PubMed

    Lorch, Robert F; Chen, Hung-Tao; Lemarié, Julie

    2012-09-01

    Two experiments tested the effects of preview sentences and headings on the quality of college students' outlines of informational texts. Experiment 1 found that performance was much better in the preview sentences condition than in a no-signals condition for both printed text and text-to-speech (TTS) audio rendering of the printed text. In contrast, performance in the headings condition was good for the printed text but poor for the auditory presentation because the TTS software failed to communicate nonverbal information carried by the visual headings. Experiment 2 compared outlining performance for five headings conditions during TTS presentation. Using a theoretical framework, "signaling available, relevant, accessible" (SARA) information, to provide an analysis of the information content of headings in the printed text, the manipulation of the headings systematically restored information that was omitted by the TTS application in Experiment 1. The result was that outlining performance improved to levels similar to the visual headings condition of Experiment 1. It is argued that SARA is a useful framework for guiding future development of TTS software for a wide variety of text signaling devices, not just headings.

  11. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  12. 78 FR 32169 - Facilitating the Deployment of Text-to-911 and Other Next Generation 911 Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ...providers of interconnected text messaging services (i.e., all providers of software applications that enable a consumer to send text...

  13. MeDICi Software Superglue for Data Analysis Pipelines

    ScienceCinema

    Ian Gorton

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework is an integrated middleware platform developed to solve data analysis and processing needs of scientists across many domains. MeDICi is scalable, easily modified, and robust to multiple languages, protocols, and hardware platforms, and in use today by PNNL scientists for bioinformatics, power grid failure analysis, and text analysis.

  14. On the Use of Interactive Texts in Undergraduate Chemical Reaction Engineering Courses: A Pedagogical Experience

    ERIC Educational Resources Information Center

    Asensio, Daniela A.; Barassi, Francisca J.; Zambon, Mariana T.; Mazza, Germán D.

    2010-01-01

    This paper describes the results of a pedagogical experience carried out at the University of Comahue, Argentina, with an interactive text (IT) concerning Homogeneous Chemical Reactors Analysis. The IT was built on the frame of the "Mathematica" software with the aim of providing students with a robust computational tool. Students'…

  15. Plagiarism: a case study of quality improvement in a taught postgraduate programme.

    PubMed

    Marshall, Tom; Taylor, Beck; Hothersall, Ellie; Pérez-Martín, Leticia

    2011-01-01

    Plagiarism is a common issue in education. Software can detect plagiarism, but little is known about prevention. To identify ways to reduce the incidence of plagiarism in a postgraduate programme. From 2006, all student assignments were monitored using plagiarism detection software (Turn It In) to produce percentage text matches for each assignment. In 2007, students were advised that software was being used and that plagiarism would result in penalties. In 2008, students attending a key module took part in an additional interactive seminar on plagiarism. A separate cohort of students did not attend the seminar, allowing comparison between attendees and non-attendees. Between 2006 and 2007, mean percentage text match values were consistent with a stable process, indicating that advice and warnings were ineffective. Control chart analysis revealed that between 2007 and 2008, mean percentage text matches fell in all nine modules where students attended the interactive seminar, but in none where students did not. This indicated that the interactive seminar had an effect. In 2008, there were no occurrences of plagiarism. Improvements were maintained in 2009. Advice and warnings against plagiarism were ineffective, but a subsequent interactive seminar was effective at reducing plagiarism.
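
    The control-chart logic used in the study can be illustrated briefly: compute limits from a baseline period, then test whether a later point signals special-cause change. The percentages below are invented, not the study's data.

      import statistics

      baseline = [18.2, 17.5, 19.1, 18.8, 17.9]  # mean % text match per module (invented)
      new_value = 9.6                            # value after the intervention (invented)

      mean = statistics.mean(baseline)
      sd = statistics.stdev(baseline)
      lcl, ucl = mean - 3 * sd, mean + 3 * sd    # 3-sigma control limits
      verdict = "special cause" if not lcl <= new_value <= ucl else "common cause"
      print(f"limits [{lcl:.1f}, {ucl:.1f}]; new point {new_value} -> {verdict}")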

  16. Review of CAI Materials.

    ERIC Educational Resources Information Center

    McCrary, Ronald G.

    A discussion of computer software and courseware for second-language instruction outlines considerations for selecting software of various kinds and presents a list of selected computer programs. Suggestions are made for choosing text-specific software, non-text-specific software intended for language instruction, word processors intended for…

  17. ImatraNMR: Novel software for batch integration and analysis of quantitative NMR spectra

    NASA Astrophysics Data System (ADS)

    Mäkelä, A. V.; Heikkilä, O.; Kilpeläinen, I.; Heikkinen, S.

    2011-08-01

    Quantitative NMR spectroscopy is a useful and important tool for the analysis of various mixtures. Recently, in addition to traditional quantitative 1D 1H and 13C NMR methods, a variety of pulse sequences aimed at quantitative or semiquantitative analysis have been developed. To obtain usable results from quantitative spectra, they must be processed and analyzed with suitable software. Currently, there are many processing packages available from spectrometer manufacturers and third-party developers, and most of them are capable of analyzing and integrating quantitative spectra. However, they are mainly aimed at processing single or a few spectra, and they are slow and difficult to use when large numbers of spectra and signals are being analyzed, even when using pre-saved integration areas or custom scripting features. In this article, we present novel software, ImatraNMR, designed for batch analysis of quantitative spectra. In addition to its capability of analyzing large numbers of spectra, it provides results in text and CSV formats, allowing further data analysis using spreadsheet programs or general analysis programs such as Matlab. The software is written in Java, and thus it should run on any platform capable of providing Java Runtime Environment version 1.6 or newer; however, it has currently been tested only with Windows and Linux (Ubuntu 10.04). The software is free for non-commercial use, and is provided with source code upon request.
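
    The batch idea reduces to integrating the same spectral windows across many spectra and writing one row per spectrum; the sketch below does this with NumPy over invented arrays (ImatraNMR itself is Java, and its file formats are not shown in this record).

      import csv
      import numpy as np

      ppm = np.linspace(10, 0, 1000)      # shared ppm axis (descending)
      spectra = {f"sample_{i}": np.random.rand(1000) for i in range(3)}
      windows = [(8.2, 7.8), (2.1, 1.9)]  # (high, low) ppm integration bounds

      with open("integrals.csv", "w", newline="") as f:
          out = csv.writer(f)
          out.writerow(["spectrum"] + [f"{hi}-{lo} ppm" for hi, lo in windows])
          for name, y in spectra.items():
              row = [y[(ppm <= hi) & (ppm >= lo)].sum() for hi, lo in windows]
              out.writerow([name] + [f"{v:.3f}" for v in row])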

  18. The software for automatic creation of the formal grammars used by speech recognition, computer vision, editable text conversion systems, and some new functions

    NASA Astrophysics Data System (ADS)

    Kardava, Irakli; Tadyszak, Krzysztof; Gulua, Nana; Jurga, Stefan

    2017-02-01

    For more flexible environmental perception by artificial intelligence, supporting software modules are needed that can automate the creation of specific language syntax and perform further analysis for relevant decisions based on semantic functions. Under our proposed approach, pairs of formal rules can be created for given sentences (in the case of natural languages) or statements (in the case of special languages) with the help of computer vision, speech recognition, or an editable text conversion system, for further automatic improvement. In other words, we have developed an approach that can significantly improve the automation of the training process of artificial intelligence, giving it, as a result, a higher level of self-development skills independent of us (the users). Based on our approach, we have developed a software demo version, which includes the algorithm and source code implementing all of the above-mentioned components (computer vision, speech recognition, and an editable text conversion system). The program can work in multi-stream mode and simultaneously create a syntax based on information received from several sources.

  19. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface are reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited to a small set of commands. The program's complexity then can be increased incrementally. A model of the operator's behavior and three types of instructions to the underlying application software are included in the rule base. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
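
    A forward-chaining pass of the kind described (facts about the operator matching rule antecedents, fired rules emitting commands to the application) fits in a few lines; the facts and rules below are invented for illustration.

      # Facts asserted about the current operator and context.
      facts = {"novice_user", "editing_text"}

      # Rules: (set of antecedent facts, command consequent).
      rules = [
          ({"novice_user", "editing_text"}, "show_verbose_help"),
          ({"expert_user"}, "enable_macro_mode"),
      ]

      # Fire every rule whose antecedents are all present in the fact base.
      commands = [cmd for antecedents, cmd in rules if antecedents <= facts]
      print(commands)  # -> ['show_verbose_help']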

  20. A book review of Spatial data analysis in ecology and agriculture using R

    USDA-ARS's Scientific Manuscript database

    Spatial Data Analysis in Ecology and Agriculture Using R is a valuable resource to assist agricultural and ecological researchers with spatial data analyses using the R statistical software (www.r-project.org). Special emphasis is on spatial data sets; however, the text also provides ample guidance ...

  1. Microcomputers, Software and Foreign Languages for Special Purposes: An Analysis of TXTPRO.

    ERIC Educational Resources Information Center

    Tang, Michael S.

    TXTPRO, a computer program developed as a graduate-level research tool for descriptive linguistic analysis, produces simple alphabetic and word frequency lists, analyzes word combinations, and develops concordances. With modifications, a teacher could enter the program into a mainframe or a microcomputer and use it for text analyses to develop…
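
    TXTPRO's basic outputs, word-frequency lists and concordances, are easy to sketch; the sample text below is invented and the program's own formats are not reproduced.

      from collections import Counter

      text = "the cat sat on the mat and the cat slept"
      words = text.split()
      print(Counter(words).most_common(3))  # word-frequency list

      def concordance(words, keyword, width=2):
          """Print a keyword-in-context (KWIC) line for each hit."""
          for i, w in enumerate(words):
              if w == keyword:
                  left = " ".join(words[max(0, i - width):i])
                  right = " ".join(words[i + 1:i + 1 + width])
                  print(f"{left:>12} [{w}] {right}")

      concordance(words, "cat")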

  2. Evidence synthesis software.

    PubMed

    Park, Sophie Elizabeth; Thomas, James

    2018-06-07

    It can be challenging to decide which evidence synthesis software to choose when doing a systematic review. This article discusses some of the important questions to consider in relation to the chosen method and synthesis approach. Software can support researchers in a range of ways across a range of review conditions; for example, facilitating contemporaneous collaboration across time and geographical space; in-built bias assessment tools; and line-by-line coding for qualitative textual analysis. EPPI-Reviewer is review software for research synthesis managed by the EPPI-Centre, UCL Institute of Education. EPPI-Reviewer has text mining automation technologies. Version 5 supports data sharing and re-use across the systematic review community. Open source software will soon be released. EPPI-Centre will continue to offer the software as a cloud-based service. The software is offered via a subscription with a one-month (extendible) trial available and volume discounts for 'site licences'. It is free to use for Cochrane and Campbell reviews. The next EPPI-Reviewer version is being built in collaboration with the National Institute for Health and Care Excellence, using 'surveillance' of newly published research to support 'living' iterative reviews. This is achieved using a combination of machine learning and traditional information retrieval technologies to identify the type of research each new publication describes and determine its relevance for a particular review, domain or guideline. While the amount of available knowledge and research is constantly increasing, the ways in which software can support the focus and relevance of data identification are also developing fast. Software advances are maximising the opportunities for the production of relevant and timely reviews. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Evaluation of three electronic report processing systems for preparing hydrologic reports of the U.S. Geological Survey, Water Resources Division

    USGS Publications Warehouse

    Stiltner, G.J.

    1990-01-01

    In 1987, the Water Resources Division of the U.S. Geological Survey undertook three pilot projects to evaluate electronic report processing systems as a means to improve the quality and timeliness of reports pertaining to water resources investigations. The three projects selected for study included the use of the following configuration of software and hardware: Ventura Publisher software on an IBM model AT personal computer, PageMaker software on a Macintosh computer, and FrameMaker software on a Sun Microsystems workstation. The following assessment criteria were to be addressed in the pilot studies: the combined use of text, tables, and graphics; analysis of time; ease of learning; compatibility with the existing minicomputer system; and technical limitations. It was considered essential that the camera-ready copy produced be in a format suitable for publication. Visual improvement alone was not a consideration. This report consolidates and summarizes the findings of the electronic report processing pilot projects. Text and table files originating on the existing minicomputer system were successfully transformed to the electronic report processing systems in American Standard Code for Information Interchange (ASCII) format. Graphics prepared using a proprietary graphics software package were transferred to all the electronic report processing software through the use of Computer Graphic Metafiles. Graphics from other sources were entered into the systems by scanning paper images. Comparative analysis of time needed to process text and tables by the electronic report processing systems and by conventional methods indicated that, although more time is invested in creating the original page composition for an electronically processed report, substantial time is saved in producing subsequent reports because the format can be stored and re-used by electronic means as a template. Because of the more compact page layouts, costs of printing the reports were 15% to 25% less than costs of printing the reports prepared by conventional methods. Because the largest report workload in the offices conducting water resources investigations is preparation of Water-Resources Investigations Reports, Open-File Reports, and annual State Data Reports, the pilot studies only involved these projects. (USGS)

  4. ImatraNMR: novel software for batch integration and analysis of quantitative NMR spectra.

    PubMed

    Mäkelä, A V; Heikkilä, O; Kilpeläinen, I; Heikkinen, S

    2011-08-01

    Quantitative NMR spectroscopy is a useful and important tool for the analysis of various mixtures. Recently, in addition to traditional quantitative 1D (1)H and (13)C NMR methods, a variety of pulse sequences aimed at quantitative or semiquantitative analysis have been developed. To obtain usable results from quantitative spectra, they must be processed and analyzed with suitable software. Currently, there are many processing packages available from spectrometer manufacturers and third-party developers, and most of them are capable of analyzing and integrating quantitative spectra. However, they are mainly aimed at processing single or a few spectra, and they are slow and difficult to use when large numbers of spectra and signals are being analyzed, even when using pre-saved integration areas or custom scripting features. In this article, we present novel software, ImatraNMR, designed for batch analysis of quantitative spectra. In addition to its capability of analyzing large numbers of spectra, it provides results in text and CSV formats, allowing further data analysis using spreadsheet programs or general analysis programs such as Matlab. The software is written in Java, and thus it should run on any platform capable of providing Java Runtime Environment version 1.6 or newer; however, it has currently been tested only with Windows and Linux (Ubuntu 10.04). The software is free for non-commercial use, and is provided with source code upon request. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Using open captions to revise writing in digital stories composed by d/Deaf and hard of hearing students.

    PubMed

    Strassman, Barbara K; O'Dell, Katie

    2012-01-01

    Using a nonexperimental design, the researchers explored the effect of captioning as part of the writing process of individuals who are d/Deaf and hard of hearing. Sixty-nine d/Deaf and hard of hearing middle school students composed responses to four writing-to-learn activities in a word processor. Two compositions were revised and published with software that displayed texts as captions to digital images; two compositions were revised with a word processor and published on paper. Analysis showed increases in content-area vocabulary, text length, and inclusion of main ideas and details for texts revised in the captioning software. Given the nonexperimental design, it is not possible to determine the extent to which the results could be attributed to captioned revisions. However, the findings do suggest that the images acted as procedural facilitators, triggering recall of vocabulary and details.

  6. Assessing the Lexico-Grammatical Characteristics of a Corpus of College-Level Statistics Textbooks: Implications for Instruction and Practice

    ERIC Educational Resources Information Center

    Wagler, Amy E.; Lesser, Lawrence M.; González, Ariel I.; Leal, Luis

    2015-01-01

    A corpus of current editions of statistics textbooks was assessed to compare aspects and levels of readability for the topics of "measures of center," "line of fit," "regression analysis," and "regression inference." Analysis with lexical software of these text selections revealed that the large corpus can…
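
    Readability analysis of the kind the lexical software performs boils down to counting; the sketch below computes a rough Flesch Reading Ease score with a naive syllable counter (real tools use better syllable estimates, and the corpus study may have used different measures).

      import re

      def syllables(word):
          """Naive syllable count: contiguous vowel groups, minimum one."""
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def flesch(text):
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z']+", text)
          syl = sum(syllables(w) for w in words)
          return 206.835 - 1.015 * len(words) / sentences - 84.6 * syl / len(words)

      print(round(flesch("The line of fit minimizes squared error. "
                         "Regression inference extends this idea."), 1))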

  7. Increasing the trustworthiness of research results: the role of computers in qualitative text analysis

    Treesearch

    Lynne M. Westphal

    2000-01-01

    By using computer packages designed for qualitative data analysis a researcher can increase trustworthiness (i.e., validity and reliability) of conclusions drawn from qualitative research results. This paper examines trustworthiness issues and the role of computer software (QSR's NUD*IST) in the context of a current research project investigating the social...

  8. Ada (Trade Name) Foundation Technology. Volume 4. Software Requirements for WIS (WWMCCS (World Wide Military Command and Control System) Information System) Text Processing Prototypes

    DTIC Science & Technology

    1986-12-01

    graphics: The package allows a character set which can be defined by users, giving the picture for a character by designating its pixels. Such characters...type fonts and user-oriented "help" messages tailored to the operations being performed and user expertise. In general, critical design issues...other volumes include command language, software design, description and analysis tools, database management systems, operating systems; planning and

  9. An Investigation of the Effects of Reader Characteristics on Reading Comprehension Of a General Chemistry Text

    NASA Astrophysics Data System (ADS)

    Neiles, Kelly Y.

    There is great concern in the scientific community that students in the United States, when compared with other countries, are falling behind in their scientific achievement. Increasing students' reading comprehension of scientific text may be one of the components involved in students' science achievement. To investigate students' reading comprehension, this quantitative study examined the effects of different reader characteristics, namely, students' logical reasoning ability, factual chemistry knowledge, working memory capacity, and schema of the chemistry concepts, on reading comprehension of a chemistry text. Students' reading comprehension was measured through their ability to encode the text, access the meanings of words (lexical access), make bridging and elaborative inferences, and integrate the text with their existing schemas to make a lasting mental representation of the text (situational model). Students completed a series of tasks that measured the reader characteristic and reading comprehension variables. Some of the variables were measured using new technologies and software to investigate different cognitive processes. These technologies and software included eye tracking to investigate students' lexical accessing and a Pathfinder program to investigate students' schema of the chemistry concepts. The results from this study were analyzed using canonical correlation and regression analysis. The canonical correlation analysis allows for the ten variables described previously to be included in one multivariate analysis. Results indicate that the relationship between the reader characteristic variables and the reading comprehension variables is significant. The resulting canonical function accounts for a greater amount of variance in students' responses than any individual variable. Regression analysis was used to further investigate which reader characteristic variables accounted for the differences in students' responses for each reading comprehension variable. The results from this regression analysis indicated that the two schema measures (measured by the Pathfinder program) accounted for the greatest amount of variance in four of the reading comprehension variables (encoding the text, bridging and elaborative inferences, and delayed recall of a general summary). This research suggests that providing students with background information on chemistry concepts prior to having them read the text may result in better understanding and more effective incorporation of the chemistry concepts into their schema.

  10. DAKOTA JAGUAR 3.0 user's manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Bauman, Lara E; Chan, Ethan

    2013-05-01

    JAGUAR (JAva GUi for Applied Research) is a Java software tool providing an advanced text editor and graphical user interface (GUI) to manipulate DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) input specifications. This document focuses on the features necessary to use JAGUAR.

  11. JAGUAR developer's manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Ethan

    2011-06-01

    JAGUAR (JAva GUi for Applied Research) is a Java software tool providing an advanced text editor and graphical user interface (GUI) to manipulate DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) input specifications. This document focuses on the technical background necessary for a developer to understand JAGUAR.

  12. "Look What I Did!": Student Conferences with Text-to-Speech Software

    ERIC Educational Resources Information Center

    Young, Chase; Stover, Katie

    2014-01-01

    The authors describe a strategy that empowers students to edit and revise their own writing. Students input their writing into text-to-speech software that rereads the text aloud. While listening, students make necessary revisions and edits.

  13. Administrative/Office Technology. A Guide to Resources.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Vocational Instructional Materials Lab.

    This guide, which was written for general marketing instructors in Ohio, lists nearly 450 resources for use in conjunction with the Administrative/Office Technology Occupational Competency Analysis Profile. The texts, workbooks, modules, software, videos, and learning activities packets listed are categorized by the following topics:…

  14. General Marketing. A Guide to Resources.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Vocational Instructional Materials Lab.

    This guide, which was written for general marketing instructors in Ohio, lists more than 600 resources for use in conjunction with the General Marketing Occupational Competency Analysis Profile. The texts, workbooks, modules, software, videos, and learning activities packets listed are categorized by the following topics: human resource…

  15. What's in a story? A text analysis of burn survivors' web-posted narratives.

    PubMed

    Badger, Karen; Royse, David; Moore, Kelly

    2011-01-01

    Story-telling has been found to be beneficial following trauma, suggesting a potential intervention for burn survivors, who frequently make use of "telling their story" as part of their recovery. This study is the first to examine the word content of burn survivors' Web-posted narratives to explore their perceptions of the event, supportive resources, their post-burn well-being, and re-integration, using a comparison group and text data analysis software developed by the widely recognized James Pennebaker. Suggestions for using expressive writing or story-telling as a guided psychosocial intervention with burn survivors are made.
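
    Pennebaker-style word content analysis reports category words as a share of all words; a minimal version with an invented category lexicon is shown below (the actual software uses large validated dictionaries).

      # Hypothetical category lexicon and sample narrative.
      positive = {"grateful", "hope", "support", "recover"}
      text = "I am grateful for the support and I hope to recover fully"

      words = text.lower().split()
      share = sum(w in positive for w in words) / len(words) * 100
      print(f"positive-emotion words: {share:.1f}%")  # 4 of 12 words -> 33.3%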

  16. Teaching Radiology Physics Interactively with Scientific Notebook Software.

    PubMed

    Richardson, Michael L; Amini, Behrang

    2018-06-01

    The goal of this study is to demonstrate how the teaching of radiology physics can be enhanced with the use of interactive scientific notebook software. We used the scientific notebook software known as Project Jupyter, which is free, open-source, and available for the Macintosh, Windows, and Linux operating systems. We have created a scientific notebook that demonstrates multiple interactive teaching modules we have written for our residents using the Jupyter notebook system. Scientific notebook software allows educators to create teaching modules in a form that combines text, graphics, images, data, interactive calculations, and image analysis within a single document. These notebooks can be used to build interactive teaching modules, which can help explain complex topics in imaging physics to residents. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
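
    A cell of the kind the authors describe, combining text, calculation, and an interactive control, can be sketched with ipywidgets inside a Jupyter notebook; the inverse-square exposure example below is our illustration, not one of the authors' modules.

      from ipywidgets import interact

      def exposure(distance_m=1.0, exposure_at_1m=10.0):
          # Inverse-square fall-off of exposure rate with distance.
          print(f"exposure ~ {exposure_at_1m / distance_m ** 2:.2f} mGy/h")

      # Sliders let the learner explore the relationship interactively.
      interact(exposure, distance_m=(0.5, 4.0, 0.1), exposure_at_1m=(1.0, 20.0, 1.0))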

  17. Academic integrity and plagiarism: perceptions and experience of staff and students in a school of dentistry: a situational analysis of staff and student perspectives.

    PubMed

    Ford, P J; Hughes, C

    2012-02-01

    This project has investigated student and staff perceptions and experience of plagiarism in a large Australian dental school to develop a response to an external audit report. Workshops designed to enhance participants' understanding of plagiarism and to assist with practical ways to promote academic integrity within the school were provided to all students and staff. Anonymous surveys were used to investigate perceptions and experience of plagiarism and to assess the usefulness of the workshops. Most participants felt that plagiarism was not a problem in the school, but a significant number were undecided. The majority of participants reported that the guidelines for dealing with plagiarism were inadequate and most supported the mandatory use of text-matching software in all courses. High proportions of participants indicated that the workshops were useful and that they would consider improving their practice as a result. The study provided data that enhanced understanding of aspects of plagiarism highlighted in the report at the school level and identified areas in need of attention, such as refining and raising awareness of the guidelines and incorporation of text-matching software into courses, as well as cautions to be considered (how text-matching software is used) in planning responsive action. © 2011 John Wiley & Sons A/S.

  18. [Is there protection against copying? Thoughts about plagiarism].

    PubMed

    Schubert, András; Glänzel, Wolfgang

    2015-12-13

    There are at least two reasons why more and more cases of suspected plagiarism are perceived in the scientific literature. On the one hand, the ever-increasing pressure to publish makes authors more prone to commit, and reviewers and editors more prone to overlook, this serious ethical misdemeanor; on the other hand, with the development of text analysis software, detecting text similarities has become a simple task. The judgement of actual cases, however, requires well-grounded professional knowledge and prudent human decisions.

  19. Software system for data management and distributed processing of multichannel biomedical signals.

    PubMed

    Franaszczuk, P J; Jouny, C C

    2004-01-01

    The presented software is designed for efficient utilization of a cluster of PC computers for signal analysis of multichannel physiological data. The system consists of three main components: 1) a library of input and output procedures, 2) a database storing additional information about the location of data in the storage system, and 3) a user interface for selecting data for analysis, choosing programs for analysis, and distributing computations and output data across cluster nodes. The system allows processing of multichannel time-series data in multiple binary formats. Descriptions of the data format, channels, and time of recording are kept in separate text files. Definition and selection of multiple channel montages is possible. Epochs for analysis can be selected both manually and automatically. Implementation of new signal processing procedures is possible with minimal programming overhead for the input/output processing and user interface. The number of cluster nodes used for computations and the amount of storage can be changed with no major modification to the software. Current implementations include time-frequency analysis of multiday, multichannel recordings of intracranial EEG from epileptic patients, as well as evoked-response analyses of repeated cognitive tasks.

  20. Software Template for Instruction in Mathematics

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Moebes, Travis A.; Beall, Anna

    2005-01-01

    Intelligent Math Tutor (IMT) is a software system that serves as a template for creating software for teaching mathematics. IMT can be easily connected to artificial-intelligence software and other analysis software through input and output of files. IMT provides an easy-to-use interface for generating courses that include tests containing both multiple-choice and fill-in-the-blank questions, and enables tracking of test scores. IMT makes it easy to generate software for Web-based courses or to manufacture compact disks containing executable course software. IMT also can function as a Web-based application program, with features that run quickly on the Web, while retaining the intelligence of a high-level-language application program with many graphics. IMT can be used to write application programs in text, graphics, and/or sound, so that the programs can be tailored to the needs of most handicapped persons. The course software generated by IMT follows a "back to basics" approach of teaching mathematics by inducing the student to apply creative mathematical techniques in the process of learning. Students are thereby led to discover mathematical fundamentals and come to understand mathematics more deeply than they could through simple memorization.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargupta, H.; Stafford, B.; Hamzaoglu, I.

    This paper describes an experimental parallel/distributed data mining system, PADMA (PArallel Data Mining Agents), that uses software agents for local data access and analysis and a web-based interface for interactive data visualization. It also presents the results of applying PADMA to detect patterns in unstructured texts of postmortem reports and laboratory test data for hepatitis C patients.

  2. Oak Ridge Computerized Hierarchical Information System (ORCHIS) status report, July 1973

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, A.A.

    1974-01-01

    This report summarizes the concepts, software, and contents of the Oak Ridge Computerized Hierarchical Information System. This data analysis and text processing system was developed as an integrated, comprehensive information processing capability to meet the needs of an on-going multidisciplinary research and development organization. (auth)

  3. Choosing and Using Text-to-Speech Software

    ERIC Educational Resources Information Center

    Peters, Tom; Bell, Lori

    2007-01-01

    This article describes a computer-based technology for generating speech called text-to-speech (TTS). This software is ready for widespread use by libraries, other organizations, and individual users. It offers the affordable ability to turn just about any electronic text that is not image-based into an artificially spoken communication. The…

  4. Development and prospective evaluation of an automated software system for quality control of quantitative 99mTc-MAG3 renal studies.

    PubMed

    Folks, Russell D; Garcia, Ernest V; Taylor, Andrew T

    2007-03-01

    Quantitative nuclear renography has numerous potential sources of error. We previously reported the initial development of a computer software module for comprehensively addressing the issue of quality control (QC) in the analysis of radionuclide renal images. The objective of this study was to prospectively test the QC software. The QC software works in conjunction with standard quantitative renal image analysis using a renal quantification program. The software saves a text file that summarizes QC findings as possible errors in user-entered values, calculated values that may be unreliable because of the patient's clinical condition, and problems relating to acquisition or processing. To test the QC software, a technologist not involved in software development processed 83 consecutive nontransplant clinical studies. The QC findings of the software were then tabulated. QC events were defined as technical (study descriptors that were out of range or were entered and then changed, unusually sized or positioned regions of interest, or missing frames in the dynamic image set) or clinical (calculated functional values judged to be erroneous or unreliable). Technical QC events were identified in 36 (43%) of 83 studies. Clinical QC events were identified in 37 (45%) of 83 studies. Specific QC events included starting the camera after the bolus had reached the kidney, dose infiltration, oversubtraction of background activity, and missing frames in the dynamic image set. QC software has been developed to automatically verify user input, monitor calculation of renal functional parameters, summarize QC findings, and flag potentially unreliable values for the nuclear medicine physician. Incorporation of automated QC features into commercial or local renal software can reduce errors and improve technologist performance and should improve the efficiency and accuracy of image interpretation.
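
    For illustration, a hypothetical Python sketch of two behaviors described above, range-checking user-entered values and saving a text file that summarizes QC findings, is given below; the field names and plausibility limits are invented, not those of the published module.

        # Field names and plausibility limits are illustrative, not the published ones.
        PLAUSIBLE = {
            "height_cm": (100, 220),
            "weight_kg": (20, 250),
            "injected_dose_MBq": (100, 400),
        }

        def qc_report(entries, path="qc_summary.txt"):
            findings = []
            for field, value in entries.items():
                low, high = PLAUSIBLE.get(field, (float("-inf"), float("inf")))
                if not low <= value <= high:
                    findings.append(f"POSSIBLE ERROR: {field}={value} outside [{low}, {high}]")
            with open(path, "w") as out:    # text file summarizing QC findings
                out.write("\n".join(findings) or "No QC events detected.")
            return findings

        # A mistyped height (17 instead of 170) is flagged for the physician.
        print(qc_report({"height_cm": 17, "weight_kg": 70, "injected_dose_MBq": 250}))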

  5. IPAT: a freely accessible software tool for analyzing multiple patent documents with inbuilt landscape visualizer.

    PubMed

    Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T

    2015-01-01

    Intelligent Patent Analysis Tool (IPAT) is an online data retrieval tool, based on a text-mining algorithm, that extracts specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and generate various patent landscape graphs and charts. The software is coded in C# in Visual Studio 2010; it extracts publicly available patent information from web pages such as Google Patents and simultaneously studies technology trends based on user-defined parameters. In other words, IPAT combined with manual categorization will act as an excellent technology-assessment tool in competitive intelligence and due diligence for forecasting future R&D.

  6. Computerized literature reference system: use of an optical scanner and optical character recognition software.

    PubMed

    Lossef, S V; Schwartz, L H

    1990-09-01

    A computerized reference system for radiology journal articles was developed by using an IBM-compatible personal computer with a hand-held optical scanner and optical character recognition software. This allows direct entry of scanned text from printed material into word processing or data-base files. Additionally, line diagrams and photographs of radiographs can be incorporated into these files. A text search and retrieval software program enables rapid searching for keywords in scanned documents. The hand scanner and software programs are commercially available, relatively inexpensive, and easily used. This permits construction of a personalized radiology literature file of readily accessible text and images requiring minimal typing or keystroke entry.

  7. A METHODOLOGY FOR INTEGRATING IMAGES AND TEXT FOR OBJECT IDENTIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Hohimer, Ryan E.; Doucette, Peter J.

    2006-02-13

    Often text and imagery contain information that must be combined to solve a problem. One approach begins with transforming the raw text and imagery into a common structure that contains the critical information in a usable form. This paper presents an application in which the imagery of vehicles and the text from police reports were combined to demonstrate the power of data fusion to correctly identify the target vehicle--e.g., a red 2002 Ford truck identified in a police report--from a collection of diverse vehicle images. The imagery was abstracted into a common signature by first capturing the conceptual models of the imagery experts in software. Our system then (1) extracted fundamental features (e.g., wheel base, color), (2) made inferences about the information (e.g., it's a red Ford), and (3) translated the raw information into an abstract knowledge signature designed both to capture the important features and to account for uncertainty. Likewise, the conceptual models of text analysis experts were instantiated into software that was used to generate an abstract knowledge signature that could be readily compared to the imagery knowledge signature. While this experiment's primary focus was to demonstrate the power of text and imagery fusion for a specific example, it also suggested several ways that text and geo-registered imagery could be combined to help solve other types of problems.

  8. Getting more out of biomedical documents with GATE's full lifecycle open source text analytics.

    PubMed

    Cunningham, Hamish; Tablan, Valentin; Roberts, Angus; Bontcheva, Kalina

    2013-01-01

    This software article describes the GATE family of open source text analysis tools and processes. GATE is one of the most widely used systems of its type with yearly download rates of tens of thousands and many active users in both academic and industrial contexts. In this paper we report three examples of GATE-based systems operating in the life sciences and in medicine. First, in genome-wide association studies which have contributed to discovery of a head and neck cancer mutation association. Second, medical records analysis which has significantly increased the statistical power of treatment/outcome models in the UK's largest psychiatric patient cohort. Third, richer constructs in drug-related searching. We also explore the ways in which the GATE family supports the various stages of the lifecycle present in our examples. We conclude that the deployment of text mining for document abstraction or rich search and navigation is best thought of as a process, and that with the right computational tools and data collection strategies this process can be made defined and repeatable. The GATE research programme is now 20 years old and has grown from its roots as a specialist development tool for text processing to become a rather comprehensive ecosystem, bringing together software developers, language engineers and research staff from diverse fields. GATE now has a strong claim to cover a uniquely wide range of the lifecycle of text analysis systems. It forms a focal point for the integration and reuse of advances that have been made by many people (the majority outside of the authors' own group) who work in text processing for biomedicine and other areas. GATE is available online under GNU open source licences and runs on all major operating systems. Support is available from an active user and developer community and also on a commercial basis.

  9. Getting More Out of Biomedical Documents with GATE's Full Lifecycle Open Source Text Analytics

    PubMed Central

    Cunningham, Hamish; Tablan, Valentin; Roberts, Angus; Bontcheva, Kalina

    2013-01-01

    This software article describes the GATE family of open source text analysis tools and processes. GATE is one of the most widely used systems of its type with yearly download rates of tens of thousands and many active users in both academic and industrial contexts. In this paper we report three examples of GATE-based systems operating in the life sciences and in medicine. First, in genome-wide association studies which have contributed to discovery of a head and neck cancer mutation association. Second, medical records analysis which has significantly increased the statistical power of treatment/outcome models in the UK's largest psychiatric patient cohort. Third, richer constructs in drug-related searching. We also explore the ways in which the GATE family supports the various stages of the lifecycle present in our examples. We conclude that the deployment of text mining for document abstraction or rich search and navigation is best thought of as a process, and that with the right computational tools and data collection strategies this process can be made defined and repeatable. The GATE research programme is now 20 years old and has grown from its roots as a specialist development tool for text processing to become a rather comprehensive ecosystem, bringing together software developers, language engineers and research staff from diverse fields. GATE now has a strong claim to cover a uniquely wide range of the lifecycle of text analysis systems. It forms a focal point for the integration and reuse of advances that have been made by many people (the majority outside of the authors' own group) who work in text processing for biomedicine and other areas. GATE is available online under GNU open source licences and runs on all major operating systems. Support is available from an active user and developer community and also on a commercial basis. PMID:23408875

  10. INSPECT: A graphical user interface software package for IDARC-2D

    NASA Astrophysics Data System (ADS)

    AlHamaydeh, Mohammad; Najib, Mohamad; Alawnah, Sameer

    Modern-day Performance-Based Earthquake Engineering (PBEE) pivots on nonlinear analysis and its feasibility. IDARC-2D is a widely used and accepted software package for nonlinear analysis; it possesses many attractive features and capabilities. However, it is operated from the command prompt in DOS/Unix systems and requires the user to create elaborate text-based input files. To complement and facilitate the use of IDARC-2D, a pre-processing GUI software package (INSPECT) is introduced here. INSPECT is created in the C# environment and utilizes the .NET libraries and an SQLite database. Extensive testing and verification demonstrated successful, high-fidelity re-creation of several existing IDARC-2D input files. Its design and built-in features aim at expediting, simplifying, and assisting in the modeling process. Moreover, this practical aid enhances the reliability of the results and improves accuracy by reducing and/or eliminating many potential and common input mistakes. Such benefits would be appreciated by novice and veteran IDARC-2D users alike.

  11. Unobtrusive Monitoring of Spaceflight Team Functioning

    NASA Technical Reports Server (NTRS)

    Maidel, Veronica; Stanton, Jeffrey M.

    2010-01-01

    This document contains a literature review suggesting that research on industrial performance monitoring has limited value in assessing, understanding, and predicting team functioning in the context of space flight missions. The review indicates that a more relevant area of research explores the effectiveness of teams and how team effectiveness may be predicted through the elicitation of individual and team mental models. Note that the mental models referred to in this literature typically reflect a shared operational understanding of a mission setting such as the cockpit controls and navigational indicators on a flight deck. In principle, however, mental models also exist pertaining to the status of interpersonal relations on a team, collective beliefs about leadership, success in coordination, and other aspects of team behavior and cognition. Pursuing this idea, the second part of this document provides an overview of available off-the-shelf products that might assist in extraction of mental models and elicitation of emotions based on an analysis of communicative texts among mission personnel. The search for text analysis software or tools revealed no available tools to enable extraction of mental models automatically, relying only on collected communication text. Nonetheless, using existing software to analyze how a team is functioning may be relevant for selection or training, when human experts are immediately available to analyze and act on the findings. Alternatively, if output can be sent to the ground periodically and analyzed by experts on the ground, then these software packages might be employed during missions as well. A demonstration of two text analysis software applications is presented. Another possibility explored in this document is the option of collecting biometric and proxemic measures such as keystroke dynamics and interpersonal distance in order to expose various individual or dyadic states that may be indicators or predictors of certain elements of team functioning. This document summarizes interviews conducted with personnel currently involved in observing or monitoring astronauts or who are in charge of technology that allows communication and monitoring. The objective of these interviews was to elicit their perspectives on monitoring team performance during long-duration missions and the feasibility of potential automatic non-obtrusive monitoring systems. Finally, in the last section, the report describes several priority areas for research that can help transform team mental models, biometrics, and/or proxemics into workable systems for unobtrusive monitoring of space flight team effectiveness. Conclusions from this work suggest that unobtrusive monitoring of space flight personnel is likely to be a valuable future tool for assessing team functioning, but that several research gaps must be filled before prototype systems can be developed for this purpose.

  12. Cross-instrument Analysis Correlation Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McJunkin, Timothy R.

    This program has been designed to assist with tracking a sample from one analytical instrument to another, such as SEM, microscopes, micro X-ray diffraction, and other instruments where particular positions/locations on the sample are examined, photographed, etc. The software is designed for easy entry of the positions of fiducials and locations of interest, such that in a future session on the same or a different instrument the positions of interest can be re-found by using the known fiducial locations in the current and reference sessions to transform each point into the current session's coordinate system. The software is dialog-box driven, guiding the user through the necessary data entry and program choices. Information is stored in a series of text-based extensible markup language (XML) files.

  13. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing.

    PubMed

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D

    2014-10-01

    We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (p1 = p2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of p1, p2, or p2 - p1 when p1 = p2), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes p1 = p2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.
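
    As a hedged illustration of simulating correlated binary agreement data (not necessarily the authors' exact model), one can threshold a latent Gaussian with additive reader and case effects: the shared case effect induces correlation across readers, and the threshold sets the marginal agreement probability. The Python sketch below handles a single modality; a modality effect would be added analogously.

        import numpy as np
        from scipy.stats import norm

        def simulate_binary_mrmc(n_readers=5, n_cases=100, p=0.8,
                                 var_reader=0.1, var_case=0.3, seed=0):
            # Latent-variable sketch: agreement = 1 when a unit-variance latent
            # score falls below the threshold that gives marginal probability p.
            rng = np.random.default_rng(seed)
            var_err = 1.0 - var_reader - var_case                     # residual variance
            r = rng.normal(0.0, np.sqrt(var_reader), (n_readers, 1))  # reader effects
            c = rng.normal(0.0, np.sqrt(var_case), (1, n_cases))      # case effects
            e = rng.normal(0.0, np.sqrt(var_err), (n_readers, n_cases))
            latent = r + c + e                                        # total variance 1
            return (latent < norm.ppf(p)).astype(int)

        scores = simulate_binary_mrmc()
        print(scores.mean())    # approximately 0.8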

  14. SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series

    USGS Publications Warehouse

    Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory

    2018-03-07

    This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
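
    For orientation, the 7Q10 mentioned above is the annual minimum 7-day mean flow with a 10-year recurrence interval. A bare-bones empirical estimate can be sketched in Python as below; SWToolbox's actual frequency-analysis methods are more sophisticated than this plain quantile.

        import numpy as np
        import pandas as pd

        def empirical_7q10(daily_flow):
            # daily_flow: pandas Series of discharge indexed by a DatetimeIndex.
            seven_day = daily_flow.rolling(window=7).mean()        # 7-day mean flow
            annual_min = seven_day.groupby(daily_flow.index.year).min()
            # 10-year recurrence ~ 0.1 annual non-exceedance probability.
            return float(np.quantile(annual_min.dropna(), 0.1))

        # Usage with synthetic data:
        idx = pd.date_range("1990-01-01", "2019-12-31", freq="D")
        flow = pd.Series(np.random.default_rng(1).lognormal(3.0, 0.5, len(idx)), index=idx)
        print(empirical_7q10(flow))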

  15. Authoritative Authoring: Software That Makes Multimedia Happen.

    ERIC Educational Resources Information Center

    Florio, Chris; Murie, Michael

    1996-01-01

    Compares seven mid- to high-end multimedia authoring software systems that combine graphics, sound, animation, video, and text for Windows and Macintosh platforms. A run-time project was created with each program using video, animation, graphics, sound, formatted text, hypertext, and buttons. (LRW)

  16. Using health data: applying technology to work smarter. Heather Grain, Paula Procter. Elsevier. £20.99, 276pp. ISBN 9780729538893.

    PubMed

    2010-10-07

    THE TITLE of this book may give the impression that it concerns the collection and analysis of healthcare data, but it is actually about how to use Microsoft Office 2007 software in healthcare settings.

  17. Are cellular phone blocking applications effective for novice teen drivers?

    PubMed

    Creaser, Janet I; Edwards, Christopher J; Morris, Nichole L; Donath, Max

    2015-09-01

    Distracted driving is a significant concern for novice teen drivers. Although cellular phone bans apply in many jurisdictions, teen drivers often report making calls and sending texts while driving. The Minnesota Teen Driver Study incorporated cellular phone blocking functions via a software application for 182 novice teen drivers in two treatment conditions. In the first condition, 92 teens ran a driver-support application on a smartphone that also blocked phone usage. In the second condition, 90 teens ran the same application with phone blocking, but it also reported monitored risky behaviors (e.g., speeding) back to parents. A third, control group of 92 novice teen drivers had the application and phone-based software installed to record cellular phone use while driving, but not block it. The two treatment groups made significantly fewer calls and texts per mile driven than the control group. The control group data also showed a higher propensity to text while driving than to make calls. Software that blocks cellular phone use (except calls to 911) while driving can be effective at mitigating calling and texting among novice teen drivers. However, subjective data indicate that some teens were motivated to find ways around the software, or to use another teen's phone while driving when they could not use their own. Cellular phone bans on calling and texting are the first step toward changing behaviors associated with texting and driving, particularly among novice teen drivers. Blocking software has the additional potential to reduce impulsive calling and texting while driving among novice teen drivers who may well know the risks but find it difficult to ignore a call or text while driving.

  18. Ambiguity and variability of database and software names in bioinformatics.

    PubMed

    Duck, Geraint; Kovacevic, Aleksandar; Robertson, David L; Stevens, Robert; Nenadic, Goran

    2015-01-01

    There are numerous options available to achieve various tasks in bioinformatics, but until recently, there were no tools that could systematically identify mentions of databases and tools within the literature. In this paper we explore the variability and ambiguity of database and software name mentions and compare dictionary and machine learning approaches to their identification. Through the development and analysis of a corpus of 60 full-text documents manually annotated at the mention level, we report high variability and ambiguity in database and software mentions. On a test set of 25 full-text documents, a baseline dictionary look-up achieved an F-score of 46 %, highlighting not only variability and ambiguity but also the extensive number of new resources introduced. A machine learning approach achieved an F-score of 63 % (with precision of 74 %) and 70 % (with precision of 83 %) for strict and lenient matching respectively. We characterise the issues with various mention types and propose potential ways of capturing additional database and software mentions in the literature. Our analyses show that identification of mentions of databases and tools is a challenging task that cannot be achieved by relying on current manually-curated resource repositories. Although machine learning shows improvement and promise (primarily in precision), more contextual information needs to be taken into account to achieve a good degree of accuracy.
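
    A dictionary look-up baseline of the kind evaluated in the paper can be sketched in a few lines of Python; the resource list here is a toy stand-in for the study's much larger lexicon, and the example illustrates why ambiguous names limit both recall and precision.

        import re

        # Toy resource list; the study's actual dictionary was far larger.
        RESOURCE_NAMES = {"BLAST", "UniProt", "GATE", "Bioconductor"}

        def dictionary_mentions(text):
            # Baseline look-up: exact token matches against the resource list.
            # Ambiguity (e.g., 'GATE' as an ordinary word in capitals) is exactly
            # what limits this approach, consistent with the reported 46% F-score.
            tokens = re.findall(r"\b\w+\b", text)
            return [(i, tok) for i, tok in enumerate(tokens) if tok in RESOURCE_NAMES]

        print(dictionary_mentions("Reads were aligned with BLAST and curated in UniProt."))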

  19. Electric Circuit Theory--Computer Illustrated Text.

    ERIC Educational Resources Information Center

    Riches, Brian

    1990-01-01

    Discusses the use of a computer-illustrated text (CIT) with integrated software to teach electric circuit theory to college students. Examples of software use are given, including simple animation, graphical displays, and problem-solving programs. Issues affecting electric circuit theory instruction are also addressed, including mathematical…

  20. Mining Bug Databases for Unidentified Software Vulnerabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Milos Manic; Jason Wright

    2012-06-01

    Identifying software vulnerabilities is becoming more important as critical and sensitive systems increasingly rely on complex software systems. It has been suggested in previous work that some bugs are only identified as vulnerabilities long after the bug has been made public. These vulnerabilities are known as hidden impact vulnerabilities. This paper discusses the feasibility and necessity of mining common publicly available bug databases for vulnerabilities that are yet to be identified. We present bug database analysis of two well known and frequently used software packages, namely the Linux kernel and MySQL. It is shown that for both Linux and MySQL, a significant portion of the vulnerabilities discovered for the time period from January 2006 to April 2011 were hidden impact vulnerabilities. It is also shown that the percentage of hidden impact vulnerabilities has increased in the last two years for both software packages. We then propose an improved hidden impact vulnerability identification methodology based on text mining bug databases, and conclude by discussing a few potential problems faced by such a classifier.

  1. OSCAR4: a flexible architecture for chemical text-mining.

    PubMed

    Jessop, David M; Adams, Sam E; Willighagen, Egon L; Hawizy, Lezan; Murray-Rust, Peter

    2011-10-14

    The Open-Source Chemistry Analysis Routines (OSCAR) software, a toolkit for the recognition of named entities and data in chemistry publications, has been developed since 2002. Recent work has resulted in the separation of the core OSCAR functionality and its release as the OSCAR4 library. This library features a modular API (based on reduction of surface coupling) that permits client programmers to easily incorporate it into external applications. OSCAR4 offers a domain-independent architecture upon which chemistry specific text-mining tools can be built, and its development and usage are discussed.

  2. Development of software and modification of Q-FISH protocol for estimation of individual telomere length in immunopathology.

    PubMed

    Barkovskaya, M Sh; Bogomolov, A G; Knauer, N Yu; Rubtsov, N B; Kozlov, V A

    2017-04-01

    Telomere length is an important indicator of proliferative cell history and potential. Decreasing telomere length in cells of the immune system can indicate immune aging in immune-mediated and chronic inflammatory diseases. Quantitative fluorescent in situ hybridization (Q-FISH) of a labeled (C3TA2)3 peptide nucleic acid probe onto fixed metaphase cells, followed by digital image microscopy, allows the evaluation of telomere length in the arms of individual chromosomes. Computer-assisted analysis of microscopic images can provide quantitative information on the number of telomeric repeats in individual telomeres. We developed new software to estimate telomere length. The MeTeLen software contains new options that can be used to solve some Q-FISH and microscopy problems, including correction of irregular light effects and elimination of background fluorescence. The identification and description of chromosomes and chromosome regions are essential to the Q-FISH technique. To improve the quality of cytogenetic analysis after Q-FISH, we optimized the temperature and time of DNA denaturation to get better DAPI banding of metaphase chromosomes. MeTeLen was tested by comparing telomere length estimates for sister chromatids, background fluorescence estimates, and correction of nonuniform light effects. Application of the developed software to the analysis of telomere length in patients with rheumatoid arthritis was demonstrated.

  3. [Analysis of the composition and medication regularities of prescriptions treating hypochondriac pain, based on the traditional Chinese medicine inheritance support platform].

    PubMed

    Zhao, Yan-qing; Teng, Jing

    2015-03-01

    To analyze the composition and medication regularities of prescriptions treating hypochondriac pain in the Chinese journal full-text database (CNKI) based on the traditional Chinese medicine inheritance support system, in order to provide a reference for further research on and development of new traditional Chinese medicines treating hypochondriac pain. The traditional Chinese medicine inheritance support platform software V2.0 was used to build a prescription database of Chinese medicines treating hypochondriac pain. The software's integrated data-mining methods were used to classify the prescriptions in the database according to the "four natures", "five flavors", and "meridians", and to perform frequency statistics, syndrome distribution, prescription-regularity, and new-prescription analyses. An analysis was made of 192 prescriptions treating hypochondriac pain to determine the frequencies of medicines in the prescriptions and the commonly used medicine pairs and combinations, and 15 new prescriptions were summarized. This study indicated that the prescriptions treating hypochondriac pain in the Chinese journal full-text database are mostly those for soothing liver-qi stagnation, promoting qi and activating blood, clearing heat and resolving dampness, and invigorating the spleen and removing phlegm, with a cold property and bitter taste, and that they reflect the principle of "distinguishing deficiency and excess, and relieving pain by smoothening meridians" in treating hypochondriac pain.

  4. MAMA User Guide v2.0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaschen, Brian Keith; Bloch, Jeffrey Joseph; Porter, Reid

    Morphological signatures of bulk SNM materials have significant promise, but these potential signatures are not fully utilized. This document describes software tools, collectively called MAMA (Morphological Analysis for Material Attribution), that can help provide robust and accurate quantification of morphological features in bulk-material microscopy images (optical, SEM). Although many of the specific tools are not unique to MAMA, the software package has been designed specifically for nuclear material morphological analysis and is at a point where it can be easily adapted (by Los Alamos or by collaborators) in response to new, different, or changing forensics needs. The current release of the MAMA software includes only the image quantification, description, and annotation functionality. Only limited information on a sample, its pedigree, and its chemistry is recorded inside this part of the software; this decision was based on initial feedback and the fact that several analytical chemistry databases are being developed within the community. Currently MAMA is a standalone program that can export quantification results in a basic text format that can be imported into other programs such as Excel and Access. There is also a basic report-generating feature that produces HTML-formatted pages of the same information. We will be working with collaborators to provide better integration of MAMA into their particular systems, databases, and workflows.

  5. Two Strategies for Qualitative Content Analysis: An Intramethod Approach to Triangulation.

    PubMed

    Renz, Susan M; Carrington, Jane M; Badger, Terry A

    2018-04-01

    The overarching aim of qualitative research is to gain an understanding of certain social phenomena. Qualitative research involves the studied use and collection of empirical materials, all to describe moments and meanings in individuals' lives. Data derived from these various materials require a form of analysis of the content, focusing on written or spoken language as communication, to provide context and understanding of the message. Qualitative research often involves the collection of data through extensive interviews, note taking, and tape recording. These methods are time- and labor-intensive. With the advances in computerized text analysis software, the practice of combining methods to analyze qualitative data can assist the researcher in making large data sets more manageable and enhance the trustworthiness of the results. This article will describe a novel process of combining two methods of qualitative data analysis, or Intramethod triangulation, as a means to provide a deeper analysis of text.

  6. Tarjetas v.1.2015.7.23

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burchard, Ross L.; Pierson, Kathleen P.; Trumbo, Derek

    Tarjetas is used to generate requirements from source documents. These source documents are in a hierarchical XML format produced from PDF documents processed through the "Reframe" software package. The software includes the ability to create Topics and associate text Snippets with those Topics. Requirements are then generated, and text Snippets with their associated Topics are referenced to each requirement. The software thus maintains traceability from a requirement ultimately back to the source document that produced the snippet.
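
    As a hypothetical sketch of the traceability chain described above (snippet to topic to requirement to source document), consider the following Python fragment; the XML element and attribute names are invented, since the abstract does not describe the Reframe schema.

        import xml.etree.ElementTree as ET

        # Element and attribute names below are invented for illustration; the
        # actual Reframe XML schema is not described in the abstract.
        doc = ET.fromstring(
            "<doc>"
            "  <snippet source_id='SRC-42' topic='Power'>"
            "    <text>The unit shall operate from a 28 V supply.</text>"
            "  </snippet>"
            "</doc>")

        snippets = [{"text": s.findtext("text").strip(),
                     "source": s.get("source_id"),
                     "topic": s.get("topic")}
                    for s in doc.iter("snippet")]

        def make_requirement(req_text, snippets):
            # A requirement keeps references to its snippets and their topics,
            # preserving traceability back to the source document.
            return {"requirement": req_text,
                    "traces_to": [(s["source"], s["topic"]) for s in snippets]}

        print(make_requirement("REQ-001: The unit shall accept 28 V input power.", snippets))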

  7. Astronomical Software Directory Service

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Payne, H.; Hayes, J.

    1998-01-01

    This is the final report on the development of the Astronomical Software Directory Service (ASDS), a distributable, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching.

  8. Dynamic online surveys and experiments with the free open-source software dynQuest.

    PubMed

    Rademacher, Jens D M; Lippke, Sonia

    2007-08-01

    With computers and the World Wide Web widely available, collecting data through Web browsers is an attractive method for the social sciences. In this article, conducting PC- and Web-based trials with the software package dynQuest is described. The software manages dynamic questionnaire-based trials over the Internet or on single computers, possibly as randomized controlled trials (RCTs) if two or more groups are involved. The choice of follow-up questions can depend on previous responses, as needed for matched interventions. Data are collected in a simple text-based database that can be imported easily into other programs for postprocessing and statistical analysis. The software consists of platform-independent scripts written in the programming language Perl that use the Common Gateway Interface between Web browser and server for submission of data through HTML forms. Advantages of dynQuest are parsimony, simplicity of use and installation, transparency, and reliability. The program is available as open-source freeware from the authors.

  9. Principles and tools for collaborative entity-based intelligence analysis.

    PubMed

    Bier, Eric A; Card, Stuart K; Bodnar, John W

    2010-01-01

    Software tools that make it easier for analysts to collaborate as a natural part of their work will lead to better analysis that is informed by more perspectives. We are interested to know if software tools can be designed that support collaboration even as they allow analysts to find documents and organize information (including evidence, schemas, and hypotheses). We have modified the Entity Workspace system, described previously, to test such designs. We have evaluated the resulting design in both a laboratory study and a study where it is situated with an analysis team. In both cases, effects on collaboration appear to be positive. Key aspects of the design include an evidence notebook optimized for organizing entities (rather than text characters), information structures that can be collapsed and expanded, visualization of evidence that emphasizes events and documents (rather than emphasizing the entity graph), and a notification system that finds entities of mutual interest to multiple analysts. Long-term tests suggest that this approach can support both top-down and bottom-up styles of analysis.

  10. Mining the Geophysical Research Abstracts Corpus: Mapping the impact of Free and Open Source Software on the EGU Divisions

    NASA Astrophysics Data System (ADS)

    Löwe, Peter; Klump, Jens; Robertson, Jesse

    2015-04-01

    Text mining is commonly employed as a tool in data science to investigate and chart emergent information from corpora of research abstracts, such as the Geophysical Research Abstracts (GRA) published by Copernicus. In this context, current standards, such as persistent identifiers like DOIs and ORCIDs, allow us to trace, cite, and map links between journal publications, the underlying research data, and scientific software. This network can be expressed as a directed graph, which enables us to chart networks of cooperation and innovation, thematic foci, and the locations of research communities in time and space. However, this approach to data science, which focuses self-referentially on the research process rather than on the topical work, is still at a developing stage. Scientific work presented at the EGU General Assembly is often the first step towards new approaches and innovative ideas in the geospatial community. It represents a rich, deep, and heterogeneous source of geoscientific thought. This corpus is a significant data source for data science which has not previously been analysed on this scale. In this work, the corpus of the Geophysical Research Abstracts is used for the first time as a database for topical text-mining analyses. For this, we used a sturdy and customizable software framework based on the work of Schmitt et al. [1]. For the analysis we used the High Performance Computing infrastructure of the German Research Centre for Geosciences GFZ in Potsdam, Germany. Here, we report first results from the analysis of the continuing spread of Free and Open Source Software (FOSS) tools within the EGU communities, mapping the general increase of FOSS-themed GRA articles in the last decade and the developing spatial patterns of the parties and FOSS topics involved. References: [1] Schmitt, L. M., Christianson, K. T., Gupta, R.: Linguistic Computing with UNIX Tools, in Kao, A., Poteet, S. R. (Eds.): Natural Language Processing and Text Mining, Springer, 2007. doi:10.1007/978-1-84628-754-1_12.
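
    At its core, the trend analysis described above reduces to counting abstracts per year that match FOSS-related terms. A toy Python sketch of that counting step follows; the term list and sample data are illustrative only.

        import re
        from collections import Counter

        FOSS_TERMS = re.compile(r"\b(open[- ]source|QGIS|GRASS GIS|GDAL|FOSS)\b", re.I)

        def foss_counts_by_year(abstracts):
            # abstracts: iterable of (year, text) pairs; counts, per year, the
            # abstracts mentioning at least one FOSS-related term.
            hits = Counter()
            for year, text in abstracts:
                if FOSS_TERMS.search(text):
                    hits[year] += 1
            return dict(sorted(hits.items()))

        sample = [(2008, "The workflow is built on GDAL."),
                  (2014, "An open-source processing chain for InSAR.")]
        print(foss_counts_by_year(sample))    # {2008: 1, 2014: 1}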

  11. Buyers Guide: Communications Software--Overview; Ratings Digest; Reviews; Benchmarks.

    ERIC Educational Resources Information Center

    Lockwood, Russ; And Others

    1988-01-01

    Contains articles which review communications software. Includes "Crosstalk Mark 4," "ProComm," "Freeway Advanced," "Windows InTalk," "Relay Silver," and "Smartcom III." Compares the packages in terms of Text Proprietary, MCI Upload, Text ASCII, Spreadsheet Proprietary, Text XMODEM, Spreadsheet XMODEM, MCI Download, documentation, support and service, ease of use,…

  12. An Empirical Study of a District Wide K-2 Performance Assessment Program: Teacher Practices, Information Gained, and Use of Assessment Results.

    ERIC Educational Resources Information Center

    Lanting, Ashley

    This study used a qualitative research methodology to examine how four primary teachers used a district literacy performance assessment. Data were collected through observations, interviews, and documents. Grounded theory and NUD*IST software were used for text analysis and theory building. Findings show that a theory-grounded teacher-empowered…

  13. Exploring Adolescents' Multimodal Responses to "The Kite Runner": Understanding How Students Use Digital Media for Academic Purposes

    ERIC Educational Resources Information Center

    Jocius, Robin

    2013-01-01

    This qualitative study explores how adolescent high school students in an AP English class used multiple forms of media (the internet, digital video, slide show software, video editing tools, literary texts, and writing) to respond to and analyze a contemporary novel, "The Kite Runner". Using a multimodal analysis framework, the author explores…

  14. Illinois Trauma Centers and Intimate Partner Violence: Are We Doing Our Share?

    ERIC Educational Resources Information Center

    Crandall, Marie; Schwab, Jennifer; Sheehan, Karen; Esposito, Thomas

    2009-01-01

    Intimate partner violence (IPV) is a major source of morbidity and mortality nationally. Trauma Centers can be very helpful for victims of IPV but there may be variability in IPV resource provision. A survey was mailed to each of the 65 Trauma Centers in Illinois. Stata and EZ-Text statistical software were used for analysis. Eighty-three percent…

  15. Qualities and Characteristics in the Written Reports of Doctoral Thesis Examiners

    ERIC Educational Resources Information Center

    Holbrook, Allyson; Bourke, Sid; Lovat, Terence; Dally, Kerry

    2004-01-01

    This paper outlines the procedures used in the textual analysis of examiner reports for 101 PhD candidates across disciplines in one Australian University. The method involves the use of QSR software. Three levels of findings are outlined. The first level is the coding categories that emerged out of reading the report text. There are five broad…

  16. Desktop Publishing on the Macintosh: A Software Perspective.

    ERIC Educational Resources Information Center

    Devan, Steve

    1987-01-01

    Discussion of factors to be considered in selecting desktop publishing software for the Macintosh microcomputer focuses on the two approaches to such software, i.e., batch and interactive, and three technical considerations, i.e., document, text, and graphics capabilities. Some new developments in graphics software are also briefly described. (MES)

  17. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  18. Content analysis of cancer blog posts.

    PubMed

    Kim, Sujin

    2009-10-01

    The efficacy of user-defined subject tagging and software-generated subject tagging for describing and organizing cancer blog contents was explored. The Technorati search engine was used to search the blogosphere for cancer blog postings generated during a two-month period. Postings were mined for relevant subject concepts, and blogger-defined tags and Text Analysis Portal for Research (TAPoR) software-defined tags were generated for each message. Descriptive data were collected, and the blogger-defined tags were compared with software-generated tags. Three standard vocabularies (Opinion Templates, Basic Resource, and Medical Subject Headings [MeSH] Resource) were used to assign subject terms to the blogs, with results compared for efficacy in information retrieval. Descriptive data showed that most of the studied cancer blogs (80%) contained fewer than 500 words each. The numbers of blogger-defined tags per posting (M = 4.49 per posting) were significantly smaller than the TAPoR keywords (M = 23.55 per posting). Both blogger-defined subject tags and software-generated subject tags were often overly broad or overly narrow in focus, producing less than effective search results for those seeking to extract information from cancer blogs. Additional exploration into methods for systematically organizing cancer blog postings is necessary if blogs are to become stable and efficacious information resources for cancer patients, friends, families, or providers.
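
    The comparison reported above, blogger-defined tags versus software-generated keywords, can be sketched with a simple frequency-based tagger standing in for TAPoR; the stopword list and example post are invented.

        import re
        from collections import Counter

        STOPWORDS = {"the", "and", "a", "of", "to", "in", "my", "is", "was", "i"}

        def auto_tags(post, k=5):
            # Frequency-based keywords, a crude stand-in for the TAPoR output.
            words = [w for w in re.findall(r"[a-z']+", post.lower())
                     if w not in STOPWORDS]
            return [w for w, _ in Counter(words).most_common(k)]

        def tag_overlap(blogger_tags, post):
            # How many author-chosen tags the software would also have produced.
            return {t.lower() for t in blogger_tags} & set(auto_tags(post))

        post = "Chemo week three. The nausea is better and my oncologist is hopeful."
        print(auto_tags(post), tag_overlap(["chemo", "hope"], post))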

  19. A text input system developed by using lip-image recognition based on LabVIEW for the seriously disabled.

    PubMed

    Chen, S C; Shao, C L; Liang, C K; Lin, S W; Huang, T H; Hsieh, M C; Yang, C H; Luo, C H; Wuo, C M

    2004-01-01

    In this paper, we present a text input system for the seriously disabled using lip-image recognition based on LabVIEW. The system can be divided into a software subsystem and a hardware subsystem. In the software subsystem, we adopted image-processing techniques to recognize whether the mouth is open or closed, depending on the relative distance between the upper lip and the lower lip. In the hardware subsystem, the parallel port built into the PC is used to transmit the recognized mouth status to the Morse-code text input system. Integrating the two subsystems, we implemented a text input system using lip-image recognition programmed in the LabVIEW language. We hope the system can help the seriously disabled communicate with others more easily.
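
    A minimal Python sketch of the decoding idea, thresholding the lip distance into open/closed states and turning the durations of "open" runs into Morse dots and dashes, is given below; the threshold, frame rate, and timing constants are assumed calibration values, not those of the published system.

        MORSE = {".": "E", "-": "T", "...": "S", "---": "O", ".-": "A", "-...": "B"}

        def classify_frames(lip_distances, open_threshold=12.0):
            # 1 = mouth open, 0 = closed, from the per-frame distance (pixels)
            # between upper and lower lip; the threshold is an assumed calibration.
            return [1 if d > open_threshold else 0 for d in lip_distances]

        def decode_letter(states, frame_rate=30, dash_seconds=0.4):
            # Each run of consecutive 'open' frames becomes a dot or a dash by
            # duration; this simplified decoder returns one Morse letter per call.
            symbols, run = "", 0
            for s in states + [0]:              # trailing 0 flushes the final run
                if s:
                    run += 1
                elif run:
                    symbols += "-" if run / frame_rate >= dash_seconds else "."
                    run = 0
            return MORSE.get(symbols, "?")

        # Three short mouth openings separated by closures -> "..." -> "S"
        frames = [15, 14, 16, 3, 2, 15, 16, 14, 2, 3, 15, 15, 16, 2]
        print(decode_letter(classify_frames(frames)))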

  20. Effects of text-to-speech software use on the reading proficiency of high school struggling readers.

    PubMed

    Park, Hye Jin; Takahashi, Kiriko; Roberts, Kelly D; Delise, Danielle

    2017-01-01

    The literature highlights the benefits of text-to-speech (TTS) software when used as an assistive technology facilitating struggling readers' access to print. However, the effects of TTS software use on students' unassisted reading proficiency have remained relatively unexplored. The researchers utilized an experimental design to investigate whether 9th-grade struggling readers who use TTS software to read course materials demonstrate significant improvements in unassisted reading performance. A total of 164 students of 30 teachers in Hawaii participated in the study. Analysis of covariance results indicated that the TTS intervention had a significant, positive effect on student reading vocabulary and reading comprehension after 10 weeks of TTS software use (582 minutes on average). The study has several limitations; nevertheless, it opens a discussion of, and demonstrates the need for, further research investigating TTS software as a viable reading intervention for adolescent struggling readers.

  1. Issues and approaches for electronic document approval and transmittal using digital signatures and text authentication: Prototype documentation

    NASA Astrophysics Data System (ADS)

    Boling, M. E.

    1989-09-01

    Prototypes were assembled pursuant to recommendations made in report K/DSRD-96, Issues and Approaches for Electronic Document Approval and Transmittal Using Digital Signatures and Text Authentication, to examine the possibilities for integrating available hardware and software into cost-effective systems for digital signatures and text authentication. These prototypes show that, on a LAN, a multitasking, windowed, mouse/keyboard menu-driven interface can be assembled to provide easy and quick access to bit-mapped images of documents, electronic forms, and electronic mail messages, with a means to sign, encrypt, deliver, receive or retrieve, and authenticate text and signatures. In addition, they show that some of this same software may be used in a classified environment, using host-to-terminal transactions, to accomplish these same operations. Finally, a prototype was developed demonstrating that binary files may be signed electronically and sent by point-to-point communication and over ARPANET to remote locations, where the authenticity of the code and signature may be verified. Related studies on electronic signatures and text authentication using public key encryption were done within the Department of Energy. These studies include timing studies of public key encryption software and hardware, and testing of experimental, user-generated, host-resident software for public key encryption that used commercially available command-line source code. These studies are responsive to an initiative within the Office of the Secretary of Defense (OSD) for the protection of unclassified but sensitive data. It is notable that these related studies are all built around the same commercially available public key encryption products from the private sector, and that the software selection was made independently by each study group.
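
    The sign/transmit/verify cycle these prototypes demonstrate maps directly onto present-day toolkits. Below is a sketch using the Python cryptography package, a modern stand-in for the commercial command-line products the studies used.

        from cryptography.hazmat.primitives.asymmetric import rsa, padding
        from cryptography.hazmat.primitives import hashes
        from cryptography.exceptions import InvalidSignature

        # Generate a keypair, sign a document, then authenticate it: the same
        # sign/transmit/verify cycle the prototypes demonstrate.
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        document = b"Electronic approval: release engineering drawing 42-A."

        signature = private_key.sign(
            document,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )

        try:
            private_key.public_key().verify(
                signature, document,
                padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                            salt_length=padding.PSS.MAX_LENGTH),
                hashes.SHA256(),
            )
            print("signature verified: text is authentic")
        except InvalidSignature:
            print("document or signature was altered")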

  2. Measurement of the edge plasma rotation on J-TEXT tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Z. F.; Luo, J.; Wang, Z. J.

    2013-07-15

    A multi-channel high-resolution spectrometer was developed for the measurement of edge plasma rotation on the J-TEXT tokamak. With two opposite viewing directions, the poloidal and toroidal rotations can be measured simultaneously, and the velocity accuracy is up to 1 km/s. The photon flux was enhanced by utilizing combined optical fibers; with this design, the time resolution reaches 3 ms. Assistant software, "Spectra Assist", was developed to carry out spectrometer control and data analysis automatically. A multi-channel monochromatic analyzer is designed to obtain the locations of the chosen ions simultaneously through inversion analysis. Some preliminary experimental results are presented concerning the influence of plasma density, different magnetohydrodynamic behaviors, and the application of a biased electrode.
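
    For context, the rotation velocity follows from the Doppler shift of an impurity emission line, v = c * dlambda / lambda0, and using two opposite viewing directions lets the difference of the two measured shifts cancel a common wavelength-calibration offset. A minimal Python sketch with illustrative numbers (the 464.74 nm line and the shift are examples, not J-TEXT data):

        C = 2.998e8    # speed of light, m/s

        def rotation_velocity(lambda_measured_nm, lambda_rest_nm):
            # Line-of-sight velocity from the Doppler shift: v = c * dlambda / lambda0.
            return C * (lambda_measured_nm - lambda_rest_nm) / lambda_rest_nm

        # A 0.0155 nm shift on a 464.74 nm line corresponds to about 10 km/s.
        print(rotation_velocity(464.7555, 464.7400) / 1e3, "km/s")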

  3. Unobtrusive Monitoring of Spaceflight Team Functioning. Literature Review and Operational Assessment for NASA Behavioral Health and Performance Element

    NASA Technical Reports Server (NTRS)

    Maidel, Veronica; Stanton, Jeffrey M.

    2010-01-01

    This document contains a literature review suggesting that research on industrial performance monitoring has limited value in assessing, understanding, and predicting team functioning in the context of space flight missions. The review indicates that a more relevant area of research explores the effectiveness of teams and how team effectiveness may be predicted through the elicitation of individual and team mental models. Note that the mental models referred to in this literature typically reflect a shared operational understanding of a mission setting such as the cockpit controls and navigational indicators on a flight deck. In principle, however, mental models also exist pertaining to the status of interpersonal relations on a team, collective beliefs about leadership, success in coordination, and other aspects of team behavior and cognition. Pursuing this idea, the second part of this document provides an overview of available off-the-shelf products that might assist in extraction of mental models and elicitation of emotions based on an analysis of communicative texts among mission personnel. The search for text analysis software or tools revealed no available tools to enable extraction of mental models automatically, relying only on collected communication text. Nonetheless, using existing software to analyze how a team is functioning may be relevant for selection or training, when human experts are immediately available to analyze and act on the findings. Alternatively, if output can be sent to the ground periodically and analyzed by experts on the ground, then these software packages might be employed during missions as well. A demonstration of two text analysis software applications is presented. Another possibility explored in this document is the option of collecting biometric and proxemic measures such as keystroke dynamics and interpersonal distance in order to expose various individual or dyadic states that may be indicators or predictors of certain elements of team functioning. This document summarizes interviews conducted with personnel currently involved in observing or monitoring astronauts or who are in charge of technology that allows communication and monitoring. The objective of these interviews was to elicit their perspectives on monitoring team performance during long-duration missions and the feasibility of potential automatic non-obtrusive monitoring systems. Finally, in the last section, the report describes several priority areas for research that can help transform team mental models, biometrics, and/or proxemics into workable systems for unobtrusive monitoring of space flight team effectiveness. Conclusions from this work suggest that unobtrusive monitoring of space flight personnel is likely to be a valuable future tool for assessing team functioning, but that several research gaps must be filled before prototype systems can be developed for this purpose.

  4. Biochemia Medica has started using the CrossCheck plagiarism detection software powered by iThenticate

    PubMed Central

    Šupak-Smolčić, Vesna; Šimundić, Ana-Maria

    2013-01-01

    In February 2013, Biochemia Medica joined CrossRef, which enabled us to implement the CrossCheck plagiarism detection service. All manuscripts submitted to Biochemia Medica are therefore now first assigned to a Research Integrity Editor (RIE) before the manuscript is sent for peer review. The RIE submits the text to CrossCheck analysis and is responsible for reviewing the results of the text-similarity analysis. Based on the CrossCheck results, the RIE then recommends to the Editor-in-Chief (EIC) whether the manuscript should be forwarded to peer review, corrected for suspect passages prior to peer review, or immediately rejected. The final decision on the manuscript, however, rests with the EIC. We hope that our new policy and manuscript-processing algorithm will help us to further increase the overall quality of our Journal. PMID:23894858
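
    The CrossCheck/iThenticate matching algorithm is proprietary, so the following is only a generic stand-in for the kind of text-similarity screening described: word-shingle Jaccard overlap between a submission and one indexed source. The file names are hypothetical.

        # A minimal sketch of similarity screening; NOT the actual
        # CrossCheck/iThenticate algorithm, which is not public.
        def shingles(text, n=5):
            """Set of overlapping n-word windows ("shingles")."""
            words = text.lower().split()
            return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

        def jaccard_similarity(a, b, n=5):
            sa, sb = shingles(a, n), shingles(b, n)
            if not sa or not sb:
                return 0.0
            return len(sa & sb) / len(sa | sb)

        # A high score against any indexed source would flag the manuscript
        # for closer review by the Research Integrity Editor.
        print(jaccard_similarity(open("submission.txt").read(),
                                 open("indexed_source.txt").read()))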

  5. OSCAR4: a flexible architecture for chemical text-mining

    PubMed Central

    2011-01-01

    The Open-Source Chemistry Analysis Routines (OSCAR) software, a toolkit for the recognition of named entities and data in chemistry publications, has been developed since 2002. Recent work has resulted in the separation of the core OSCAR functionality and its release as the OSCAR4 library. This library features a modular API (based on reduction of surface coupling) that permits client programmers to easily incorporate it into external applications. OSCAR4 offers a domain-independent architecture upon which chemistry specific text-mining tools can be built, and its development and usage are discussed. PMID:21999457

  6. EEGgui: a program used to detect electroencephalogram anomalies after traumatic brain injury.

    PubMed

    Sick, Justin; Bray, Eric; Bregy, Amade; Dietrich, W Dalton; Bramlett, Helen M; Sick, Thomas

    2013-05-21

    Identifying and quantifying pathological changes in brain electrical activity is important for investigations of brain injury and neurological disease. An example is the development of epilepsy, a secondary consequence of traumatic brain injury. While certain epileptiform events can be identified visually from electroencephalographic (EEG) or electrocorticographic (ECoG) records, quantification of these pathological events has proved to be more difficult. In this study we developed MATLAB-based software to assist detection of pathological brain electrical activity following traumatic brain injury (TBI), and we present our MATLAB code used for the analysis of the ECoG. The software was developed using MATLAB™ and features of the open-source EEGLAB. EEGgui is a graphical user interface in the MATLAB programming platform that allows scientists who are not proficient in computer programming to perform a number of elaborate analyses on ECoG signals, including Power Spectral Density (PSD), Short-Time Fourier analysis and Spectral Entropy (SE). ECoG records used for demonstration of this software were derived from rats that had undergone traumatic brain injury one year earlier. The software provides a graphical user interface for displaying ECoG activity and calculating normalized power density, using the fast Fourier transform, for the major brain wave frequencies (Delta, Theta, Alpha, Beta1, Beta2 and Gamma). The software further detects events in which power density for these frequency bands exceeds normal ECoG by more than 4 standard deviations. We found that epileptic events could be identified and distinguished from a variety of ECoG phenomena associated with normal changes in behavior. We further found that analysis of spectral entropy was less effective in distinguishing epileptic from normal changes in ECoG activity. The software presented here is a successful modification of EEGLAB in the MATLAB environment that allows detection of epileptiform ECoG signals in animals after TBI. The code allows import of large EEG or ECoG data records as standard text files and uses the fast Fourier transform as a basis for detection of abnormal events. The software can also be used to monitor injury-induced changes in spectral entropy if required. We hope that the software will be useful for other investigators in the field of traumatic brain injury and will stimulate future advances in quantitative analysis of brain electrical activity after neurological injury or disease.
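
    A minimal Python sketch of the detection idea (the authors' EEGgui itself is MATLAB code built on EEGLAB): band power per window from an FFT, with events flagged where any band exceeds a baseline mean by more than 4 standard deviations. As a simplification of the paper's comparison against normal ECoG, the baseline here is estimated from the record itself.

        import numpy as np

        BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
                 "beta1": (13, 20), "beta2": (20, 30), "gamma": (30, 50)}

        def band_powers(window, fs):
            """Power per frequency band from the FFT of one signal window."""
            freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
            psd = np.abs(np.fft.rfft(window)) ** 2
            return [psd[(freqs >= lo) & (freqs < hi)].sum()
                    for lo, hi in BANDS.values()]

        def detect_events(signal, fs, win_s=2.0):
            """Indices of windows where any band power exceeds mean + 4*SD."""
            n = int(win_s * fs)
            starts = range(0, signal.size - n + 1, n)
            powers = np.array([band_powers(signal[s:s + n], fs) for s in starts])
            mean, sd = powers.mean(axis=0), powers.std(axis=0)
            return [i for i in range(len(powers))
                    if np.any(powers[i] > mean + 4 * sd)]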

  7. Human vs. Computer Diagnosis of Students' Natural Selection Knowledge: Testing the Efficacy of Text Analytic Software

    NASA Astrophysics Data System (ADS)

    Nehm, Ross H.; Haertig, Hendrik

    2012-02-01

    Our study examines the efficacy of Computer Assisted Scoring (CAS) of open-response text relative to expert human scoring within the complex domain of evolutionary biology. Specifically, we explored whether CAS can diagnose the explanatory elements (or Key Concepts) that comprise undergraduate students' explanatory models of natural selection with the same fidelity as expert human scorers in a sample of >1,000 essays. We used SPSS Text Analysis 3.0 (SPSSTA) to perform our CAS and measured Kappa values (inter-rater reliability) of KC detection (i.e., computer-human rating correspondence). Our first analysis indicated that the text analysis functions (or extraction rules) developed and deployed in SPSSTA to extract individual Key Concepts (KCs) from three different items differing in several surface features (e.g., taxon, trait, type of evolutionary change) produced "substantial" (Kappa 0.61-0.80) or "almost perfect" (0.81-1.00) agreement. The second analysis explored the measurement of human-computer correspondence for KC diversity (the number of different accurate knowledge elements) in the combined sample of all 827 essays. Here we again found outstanding correspondence, and the extraction rules generated using one prompt type proved broadly applicable to other evolutionary scenarios (e.g., bacterial resistance, cheetah running speed, etc.). This result is encouraging, as it suggests that the development of new item sets may not necessitate the development of new text analysis rules. Overall, our findings suggest that CAS tools such as SPSS Text Analysis may compensate for some of the intrinsic limitations of currently used multiple-choice Concept Inventories designed to measure student knowledge of natural selection.
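
    The agreement statistic used throughout the study, Cohen's kappa, is simple enough to show in full. A self-contained sketch for one Key Concept scored present/absent per essay; the labels below are invented for illustration.

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """kappa = (p_o - p_e) / (1 - p_e) for two label sequences."""
            assert len(rater_a) == len(rater_b)
            n = len(rater_a)
            p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            labels = set(freq_a) | set(freq_b)
            p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)  # chance
            return (p_o - p_e) / (1 - p_e)

        human    = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # expert scoring of 10 essays
        computer = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]  # CAS output, same essays
        print(cohens_kappa(human, computer))  # ~0.78: "substantial" band above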

  8. Do We Still Need Controlled Vocabulary? Of Course, We Do! But How Do We Get It: The Roles for Text Analysis Softwares.

    ERIC Educational Resources Information Center

    Greenfield, Rich

    The author argues that traditional library cataloging (MARC) and the online public access catalog (OPAC) are in collision with the world of the Internet because items in electronic formats undergo MARC cataloging only on a very selective basis. Also the library profession initially isolated itself from World Wide Web development by predicting no…

  9. Software Reviews: "Pow! Zap! Ker-plunk! The Comic Book Maker" (Pelican Software).

    ERIC Educational Resources Information Center

    Porter, Bernajean

    1990-01-01

    Reviews the newest addition to Pelican's Creative Writing Series of instructional software, which uses the comic book format to provide a unique writing environment for satire, symbolism, sequencing, and combining text and graphics to communicate ideas. (SR)

  10. 10 CFR 961.11 - Text of the contract.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... program including information on cost projections, project plans and progress reports. 5. (a) Beginning on...-type documents or computer software (including computer programs, computer software data bases, and computer software documentation). Examples of technical data include research and engineering data...

  11. 10 CFR 961.11 - Text of the contract.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... program including information on cost projections, project plans and progress reports. 5. (a) Beginning on...-type documents or computer software (including computer programs, computer software data bases, and computer software documentation). Examples of technical data include research and engineering data...

  12. Integrating Text-to-Speech Software into Pedagogically Sound Teaching and Learning Scenarios

    ERIC Educational Resources Information Center

    Rughooputh, S. D. D. V.; Santally, M. I.

    2009-01-01

    This paper presents a new technique of delivery of classes--an instructional technique which will no doubt revolutionize the teaching and learning, whether for on-campus, blended or online modules. This is based on the simple task of instructionally incorporating text-to-speech software embedded in the lecture slides that will simulate exactly the…

  13. The Effect of Software Reusability on Information Theory Based Software Metrics

    DTIC Science & Technology

    1990-01-01

    of plans across programming languages and application areas, only a brief abstract treatment of non-contiguous "program parts" is mentioned in the… [remainder of excerpt is OCR residue of a C source listing]

  14. Off-the-shelf real-time monitoring of satellite constellations in a visual 3-D environment

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.; Hervias, Felipe; Cheng, Cecilia Han; Mactutis, Anthony; Angelino, Robert

    1996-01-01

    The multimission spacecraft analysis system (MSAS) data monitor is a generic software product for future real-time data monitoring and analysis. The system represents the status of a satellite constellation through the shape, color, motion and position of graphical objects floating in a three-dimensional virtual reality environment. It may be used for monitoring large volumes of data, for viewing results in configurable displays, and for providing high-level and detailed views of a constellation of monitored satellites. The data monitor is considered an improvement on conventional graphic and text-based displays, as it increases the amount of data that the operator can absorb in a given period and can be installed and configured without requiring software development by the end user. The functionality of the system is described, including navigation abilities, the representation of alarms in the cybergrid, limit violations, real-time trend analysis, and alarm status indication.

  15. Identification of misspelled words without a comprehensive dictionary using prevalence analysis.

    PubMed

    Turchin, Alexander; Chu, Julia T; Shubina, Maria; Einbinder, Jonathan S

    2007-10-11

    Misspellings are common in medical documents and can be an obstacle to information retrieval. We evaluated an algorithm that identifies misspelled words through analysis of their prevalence in a representative body of text. We evaluated the algorithm's accuracy in identifying misspellings of 200 anti-hypertensive medication names, using 2,000 potentially misspelled words randomly selected from narrative medical documents. Prevalence ratios (the frequency of the potentially misspelled word divided by the frequency of the non-misspelled word) in physician notes were computed by the software for each of the words. The software results were compared to manual assessment by an independent reviewer. The area under the ROC curve for identification of misspelled words was 0.96. Sensitivity, specificity, and positive predictive value were 99.25%, 89.72% and 82.9% at the prevalence ratio threshold (0.32768) with the highest F-measure (0.903). Prevalence analysis can be used to identify and correct misspellings with high accuracy.
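
    A minimal sketch of the prevalence-ratio computation as the abstract describes it: the corpus frequency of a candidate spelling divided by the frequency of the reference spelling, with ratios below the tuned threshold treated as misspellings. The threshold direction is our reading of the abstract, and the toy corpus is invented.

        from collections import Counter

        def prevalence_ratio(candidate, correct, corpus_tokens):
            counts = Counter(corpus_tokens)
            if counts[correct] == 0:
                return float("inf")  # no reference spelling seen; cannot decide
            return counts[candidate] / counts[correct]

        THRESHOLD = 0.32768  # value with the highest F-measure in the study

        tokens = ["atenolol"] * 950 + ["atenalol"] * 12  # toy note corpus
        ratio = prevalence_ratio("atenalol", "atenolol", tokens)
        print(ratio, "misspelling" if ratio < THRESHOLD else "accepted variant")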

  16. Telemetry Monitoring and Display Using LabVIEW

    NASA Technical Reports Server (NTRS)

    Wells, George; Baroth, Edmund C.

    1993-01-01

    The Measurement Technology Center of the Instrumentation Section configures automated data acquisition systems to meet the diverse needs of JPL's experimental research community. These systems are based on personal computers or workstations (Apple, IBM/compatible, Hewlett-Packard, and Sun Microsystems) and often include integrated data analysis, visualization and experiment control functions in addition to data acquisition capabilities. These integrated systems may include sensors, signal conditioning, data acquisition interface cards, software, and a user interface. Graphical programming is used to simplify configuration of such systems. Employment of a graphical programming language is the most important factor in enabling the implementation of data acquisition, analysis, display and visualization systems at low cost. Other important factors are the use of commercial software packages and off-the-shelf data acquisition hardware where possible. Understanding the experimenter's needs is also critical, as is an interactive approach to user interface construction and training of operators. One application was created as a result of a competitive effort between a graphical programming language team and a text-based C language programming team to verify the advantages of the graphical programming approach. With approximately eight weeks of funding over a period of three months, the text-based programming team accomplished about 10% of the basic requirements, while the Macintosh/LabVIEW team accomplished about 150%, having gone beyond the original requirements to simulate a telemetry stream and provide utility programs. This application verified that using graphical programming can significantly reduce software development time. As a result of this initial effort, additional follow-on work was awarded to the graphical programming team.

  17. Integrated Software for Analyzing Designs of Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Philips, Alan D.

    2003-01-01

    Launch Vehicle Analysis Tool (LVA) is a computer program for preliminary structural analysis of launch vehicle designs. Before LVA was developed, analyzing the structure of a launch vehicle required estimating its weight, feeding this estimate into a program to obtain pre-launch and flight loads, then feeding these loads into structural and thermal analysis programs to obtain a second weight estimate. If the first and second weight estimates differed, these analyses had to be reiterated until the solution converged. This process generally took six to twelve person-months of effort. LVA incorporates a text-to-structural-layout converter, configuration drawing, mass properties generation, pre-launch and flight loads analysis, loads output plotting, direct-solution structural analysis, and thermal analysis subprograms. These subprograms are integrated in LVA so that solutions can be iterated automatically. LVA incorporates expert-system software that makes fundamental design decisions without intervention by the user. It also includes unique algorithms based on extensive research. The total integration of analysis modules drastically reduces the need for interaction with the user. A typical solution can be obtained in 30 to 60 minutes, and subsequent runs can be done in less than two minutes.
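
    A toy sketch of the convergence loop LVA automates: iterate weight, loads and sizing until successive weight estimates agree. The update relation below is a made-up stand-in for the real loads/structural analysis chain.

        def converge_weight(w0, update, tol=1e-3, max_iter=100):
            """Fixed-point iteration: stop when the estimate stops changing."""
            w = w0
            for _ in range(max_iter):
                w_new = update(w)
                if abs(w_new - w) < tol * w:
                    return w_new
                w = w_new
            raise RuntimeError("no convergence")

        # invented relation: structural mass grows mildly with total weight
        print(converge_weight(10000.0, lambda w: 8000 + 0.15 * w))  # ~9411.8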

  18. 48 CFR 252.234-7004 - Cost and Software Data Reporting System. (NOV 2010)

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Cost and Software Data... AND CONTRACT CLAUSES Text of Provisions And Clauses 252.234-7004 Cost and Software Data Reporting System. (NOV 2010) As prescribed in 234.7101(b)(1), use the following clause: Cost and Software Data...

  19. 48 CFR 252.234-7004 - Cost and Software Data Reporting System. (NOV 2010)

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Cost and Software Data... AND CONTRACT CLAUSES Text of Provisions And Clauses 252.234-7004 Cost and Software Data Reporting System. (NOV 2010) As prescribed in 234.7101(b)(1), use the following clause: Cost and Software Data...

  20. 48 CFR 252.234-7004 - Cost and Software Data Reporting System. (NOV 2010)

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Cost and Software Data... AND CONTRACT CLAUSES Text of Provisions And Clauses 252.234-7004 Cost and Software Data Reporting System. (NOV 2010) As prescribed in 234.7101(b)(1), use the following clause: COST AND SOFTWARE DATA...

  1. 48 CFR 252.234-7004 - Cost and Software Data Reporting System. (NOV 2010)

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Cost and Software Data... AND CONTRACT CLAUSES Text of Provisions And Clauses 252.234-7004 Cost and Software Data Reporting System. (NOV 2010) As prescribed in 234.7101(b)(1), use the following clause: Cost and Software Data...

  2. PROVAT: a tool for Voronoi tessellation analysis of protein structures and complexes.

    PubMed

    Gore, Swanand P; Burke, David F; Blundell, Tom L

    2005-08-01

    Voronoi tessellation has proved to be a useful tool in protein structure analysis. We have developed PROVAT, versatile public-domain software that enables computation and visualization of Voronoi tessellations of proteins and protein complexes. It is a set of Python scripts that integrates freely available specialized software (Qhull, PyMOL, etc.) into a pipeline. The calculation component of the tool computes the Voronoi tessellation of a given protein system as described by a user-supplied XML recipe and stores the resulting neighbourhood information as text files in various styles. The Python pickle file generated in the process is used by the visualization component, a PyMOL plug-in that offers a GUI to explore the tessellation visually. PROVAT source code can be downloaded from http://raven.bioc.cam.ac.uk/~swanand/Provat1, which also provides a webserver for its calculation component, documentation and examples.
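
    The neighbourhood computation PROVAT automates can be reproduced in outline with scipy, which (like PROVAT) calls Qhull underneath. The coordinates below are random stand-ins for atom positions; this is not the authors' pipeline.

        import numpy as np
        from scipy.spatial import Voronoi

        rng = np.random.default_rng(0)
        atoms = rng.random((50, 3)) * 20.0  # toy stand-in for atom coordinates

        vor = Voronoi(atoms)
        # ridge_points lists pairs of input points whose Voronoi cells share
        # a face -- exactly the "neighbour" relation used in such analyses.
        neighbours = {}
        for a, b in vor.ridge_points:
            neighbours.setdefault(a, set()).add(b)
            neighbours.setdefault(b, set()).add(a)

        print(sorted(neighbours[0]))  # Voronoi neighbours of atom 0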

  3. LD2SNPing: linkage disequilibrium plotter and RFLP enzyme mining for tag SNPs

    PubMed Central

    Chang, Hsueh-Wei; Chuang, Li-Yeh; Chang, Yan-Jhu; Cheng, Yu-Huei; Hung, Yu-Chen; Chen, Hsiang-Chi; Yang, Cheng-Hong

    2009-01-01

    Background Linkage disequilibrium (LD) mapping is commonly used to evaluate markers for genome-wide association studies. Most types of LD software focus strictly on LD analysis and visualization, but lack supporting services for genotyping. Results We developed a freeware called LD2SNPing, which provides a complete package of mining tools for genotyping and LD analysis environments. The software provides SNP ID- and gene-centric online retrievals for SNP information and tag SNP selection from dbSNP/NCBI and HapMap, respectively. Restriction fragment length polymorphism (RFLP) enzyme information for SNP genotype is available to all SNP IDs and tag SNPs. Single and multiple SNP inputs are possible in order to perform LD analysis by online retrieval from HapMap and NCBI. An LD statistics section provides D, D', r2, δQ, ρ, and the P values of the Hardy-Weinberg Equilibrium for each SNP marker, and Chi-square and likelihood-ratio tests for the pair-wise association of two SNPs in LD calculation. Finally, 2D and 3D plots, as well as plain-text output of the results, can be selected. Conclusion LD2SNPing thus provides a novel visualization environment for multiple SNP input, which facilitates SNP association studies. The software, user manual, and tutorial are freely available at . PMID:19500380
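
    The pairwise statistics listed above follow from textbook definitions; a short sketch for two biallelic SNPs (alleles A/a and B/b), given the AB haplotype frequency and the two allele frequencies. The example values are invented.

        def ld_stats(p_AB, p_A, p_B):
            """Return D, D' and r^2 from haplotype and allele frequencies."""
            D = p_AB - p_A * p_B
            if D >= 0:
                d_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
            else:
                d_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
            d_prime = abs(D) / d_max if d_max > 0 else 0.0
            r2 = D ** 2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
            return D, d_prime, r2

        # alleles A and B co-occur more often than expected by chance
        print(ld_stats(p_AB=0.40, p_A=0.50, p_B=0.60))  # D=0.10, D'=0.5, r2~0.167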

  4. Image-based mobile service: automatic text extraction and translation

    NASA Astrophysics Data System (ADS)

    Berclaz, Jérôme; Bhatti, Nina; Simske, Steven J.; Schettino, John C.

    2010-01-01

    We present a new mobile service for the translation of text from images taken by consumer-grade cell-phone cameras. Such capability represents a new paradigm for users where a simple image provides the basis for a service. The ubiquity and ease of use of cell-phone cameras enables acquisition and transmission of images anywhere and at any time a user wishes, delivering rapid and accurate translation over the phone's MMS and SMS facilities. Target text is extracted completely automatically, requiring no bounding box delineation or related user intervention. The service uses localization, binarization, text deskewing, and optical character recognition (OCR) in its analysis. Once the text is translated, an SMS message is sent to the user with the result. Further novelties include that no software installation is required on the handset, any service provider or camera phone can be used, and the entire service is implemented on the server side.
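
    The server-side pipeline itself is not public; the sketch below only illustrates two of the named steps, binarization and OCR, using the open-source pytesseract wrapper as a stand-in (a local Tesseract installation is assumed; localization, deskewing, translation and SMS delivery are omitted, and the image file name is hypothetical).

        from PIL import Image
        import pytesseract  # requires a local Tesseract OCR installation

        def extract_text(path, threshold=128):
            img = Image.open(path).convert("L")  # greyscale
            binary = img.point(lambda p: 255 if p > threshold else 0)  # binarize
            return pytesseract.image_to_string(binary)

        print(extract_text("street_sign.jpg"))  # hypothetical camera image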

  5. Revisiting Information Technology tools serving authorship and editorship: a case-guided tutorial to statistical analysis and plagiarism detection

    PubMed Central

    Bamidis, P D; Lithari, C; Konstantinidis, S T

    2010-01-01

    With the number of scientific papers published in journals, conference proceedings, and international literature ever increasing, authors and reviewers are not only facilitated with an abundance of information, but unfortunately continuously confronted with risks associated with the erroneous copy of another's material. In parallel, Information Communication Technology (ICT) tools provide to researchers novel and continuously more effective ways to analyze and present their work. Software tools regarding statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewers and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-bystep demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial to specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces. PMID:21487489

  6. Revisiting Information Technology tools serving authorship and editorship: a case-guided tutorial to statistical analysis and plagiarism detection.

    PubMed

    Bamidis, P D; Lithari, C; Konstantinidis, S T

    2010-12-01

    With the number of scientific papers published in journals, conference proceedings, and international literature ever increasing, authors and reviewers are not only facilitated with an abundance of information, but unfortunately continuously confronted with risks associated with the erroneous copy of another's material. In parallel, Information Communication Technology (ICT) tools provide to researchers novel and continuously more effective ways to analyze and present their work. Software tools regarding statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewers and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-bystep demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial to specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces.

  7. The effect of word prediction settings (frequency of use) on text input speed in persons with cervical spinal cord injury: a prospective study.

    PubMed

    Pouplin, Samuel; Roche, Nicolas; Antoine, Jean-Yves; Vaugier, Isabelle; Pottier, Sandra; Figere, Marjorie; Bensmail, Djamel

    2017-06-01

    To determine whether activation of the frequency-of-use and automatic-learning parameters of word prediction software has an impact on text input speed. Forty-five participants with cervical spinal cord injury between C4 and C8 (ASIA AIS A or B) agreed to participate in this study. Participants were separated into two groups: a high-lesion group, for participants with a lesion level at or above C5 (ASIA AIS A or B), and a low-lesion group, for participants with a lesion between C6 and C8 (ASIA AIS A or B). A single evaluation session was carried out for each participant. Text input speed was evaluated during three copying tasks: without word prediction software (WITHOUT condition); with automatic learning of words and frequency of use deactivated (NOT_ACTIV condition); and with automatic learning of words and frequency of use activated (ACTIV condition). Results: Text input speed was significantly higher in the WITHOUT condition than in the NOT_ACTIV (p < 0.001) or ACTIV (p = 0.02) conditions for participants with low lesions. Text input speed was significantly higher in the ACTIV condition than in the NOT_ACTIV (p = 0.002) or WITHOUT (p < 0.001) conditions for participants with high lesions. Use of word prediction software with frequency of use and automatic learning activated increased text input speed in participants with high-level tetraplegia. For participants with low-level tetraplegia, use of word prediction software with frequency of use and automatic learning activated only decreased the number of errors. Implications for rehabilitation: Access to technology can be difficult for persons with disabilities such as cervical spinal cord injury (SCI), and several methods, such as word prediction software, have been developed to increase text input speed. This study shows that the frequency-of-use parameter of word prediction software affected text input speed in persons with cervical SCI, and that the effect differed according to the level of the lesion. For persons with a high-level lesion, our results suggest that this parameter must be activated so that text input speed is increased. For persons in the low-lesion group, this parameter must be activated so that the number of errors is decreased. In all cases, activation of the frequency-of-use parameter is essential to improve the efficiency of word prediction software. Health-related professionals should use these results in their clinical practice for better outcomes and therefore better patient satisfaction.
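
    The two parameters under study map onto a very simple mechanism, sketched below in generic form (this is not the commercial software used in the trial): completions are ranked by frequency of use, and automatic learning updates the counts whenever a word is accepted.

        from collections import Counter

        class WordPredictor:
            def __init__(self, lexicon=None, learning=True, use_frequency=True):
                self.freq = Counter(lexicon or {})
                self.learning = learning
                self.use_frequency = use_frequency

            def predict(self, prefix, k=3):
                matches = [w for w in self.freq if w.startswith(prefix)]
                if self.use_frequency:
                    matches.sort(key=lambda w: -self.freq[w])  # most used first
                else:
                    matches.sort()                             # static order
                return matches[:k]

            def accept(self, word):
                if self.learning:
                    self.freq[word] += 1  # frequencies adapt to the user

        p = WordPredictor({"therapy": 5, "there": 2, "thermal": 1})
        print(p.predict("the"))  # ['therapy', 'there', 'thermal']
        for _ in range(6):
            p.accept("thermal")
        print(p.predict("the"))  # 'thermal' has now risen to the top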

  8. Public reactions to e-cigarette regulations on Twitter: a text mining analysis.

    PubMed

    Lazard, Allison J; Wilcox, Gary B; Tuttle, Hannah M; Glowacki, Elizabeth M; Pikowski, Jessica

    2017-12-01

    In May 2016, the Food and Drug Administration (FDA) issued a final rule that deemed e-cigarettes to be within its regulatory authority as a tobacco product. News and opinions about the regulation were shared on social media platforms, such as Twitter, which can play an important role in shaping the public's attitudes. We analysed information shared on Twitter for insights into initial public reactions. A text mining approach was used to uncover important topics among reactions to the e-cigarette regulations on Twitter. NUVI software was used to gather tweets posted from May 1 to May 17, 2016, and SAS Text Miner V.12.1 was used for descriptive text mining to uncover the primary topics from those tweets. A total of nine topics were generated. These topics reveal initial reactions to whether the FDA's e-cigarette regulations will benefit or harm public health, how the regulations will impact the emerging e-cigarette market, and efforts to share the news. The topics were dominated by negative or mixed reactions. In the days following the FDA's announcement of the new deeming regulations, the public reaction on Twitter was largely negative. Public health advocates should consider using social media outlets to better communicate the policy's intentions, reach and potential impact for the public good to create a more balanced conversation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
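
    SAS Text Miner is commercial, but the descriptive topic-extraction step can be illustrated with an open-source analogue: TF-IDF features factorized with NMF in scikit-learn. The tweets below are invented, and the two-topic setting fits the toy corpus; the study itself extracted nine topics.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import NMF

        tweets = [
            "fda deeming rule will crush small vape shops",
            "new fda e-cigarette regulation protects kids from nicotine",
            "vape shops fear the fda deeming rule",
            "e-cigarette regulation is a win for public health",
        ]

        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(tweets)
        nmf = NMF(n_components=2, random_state=0).fit(X)  # study used 9 topics

        terms = vec.get_feature_names_out()
        for i, comp in enumerate(nmf.components_):
            top = [terms[j] for j in comp.argsort()[-5:][::-1]]
            print("topic", i, ":", ", ".join(top))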

  9. Streamlined sign-out of capillary protein electrophoresis using middleware and an open-source macro application.

    PubMed

    Mathur, Gagan; Haugen, Thomas H; Davis, Scott L; Krasowski, Matthew D

    2014-01-01

    Interfacing of clinical laboratory instruments with the laboratory information system (LIS) via "middleware" software is increasingly common. Our clinical laboratory implemented capillary electrophoresis using a Sebia® Capillarys-2™ (Norcross, GA, USA) instrument for serum and urine protein electrophoresis. Using Data Innovations Instrument Manager, an interface was established with the LIS (Cerner) that allowed for bi-directional transmission of numeric data. However, the text of the interpretive pathology report was not properly transferred. To reduce manual effort and the possibility of error in text data transfer, we developed scripts in AutoHotkey, a free, open-source macro-creation and automation software utility. Scripts were written to create macros that automated mouse and key strokes. The scripts retrieve the specimen accession number, capture user input text, and insert the text interpretation into the correct patient record in the desired format. The scripts accurately and precisely transfer narrative interpretation into the LIS. Combined with bar-code reading by the electrophoresis instrument, the scripts transfer data efficiently to the correct patient record. In addition, the AutoHotkey scripts automated the repetitive key strokes required for manual entry into the LIS, making protein electrophoresis sign-out easier to learn and faster to use by the pathology residents. Scripts allow for either preliminary verification by residents or final sign-out by the attending pathologist. Using the open-source AutoHotkey software, we successfully improved the transfer of text data between the capillary electrophoresis software and the LIS. The use of open-source software tools should not be overlooked as a way to improve the interfacing of laboratory instruments.
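
    A rough Python analogue (via the pyautogui package) of what the authors' AutoHotkey macros do: focus the LIS interpretation field, then type the stored interpretation for the scanned accession number. The screen coordinates, lookup table and accession format are invented placeholders, not details from the paper.

        import pyautogui

        interpretations = {
            "S14-00123": "Monoclonal band identified in the gamma region.",
        }

        def sign_out(accession):
            pyautogui.click(400, 300)  # focus the LIS text field (assumed position)
            pyautogui.write(interpretations[accession], interval=0.01)
            pyautogui.press("tab")     # move focus toward the verify control

        sign_out("S14-00123")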

  10. Distributed and Collaborative Software Analysis

    NASA Astrophysics Data System (ADS)

    Ghezzi, Giacomo; Gall, Harald C.

    Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of software analysis, such as source code analysis, co-change analysis or bug prediction. However, easy and straightforward synergies between these analyses and tools rarely exist because of their stand-alone nature, their platform dependence, their different input and output formats and the variety of data to analyze. As a consequence, distributed and collaborative software analysis scenarios, and in particular interoperability, are severely limited. We describe a distributed and collaborative software analysis platform that allows for seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. We realize software analysis tools as services that can be accessed and composed over the Internet. These distributed analysis services shall be widely accessible through our incrementally augmented Software Analysis Broker, where organizations and tool providers can register and share their tools. To allow (semi-)automatic use and composition of these tools, they are classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis.

  11. Orthographic Learning and the Role of Text-to-Speech Software in Dutch Disabled Readers

    ERIC Educational Resources Information Center

    Staels, Eva; Van den Broeck, Wim

    2015-01-01

    In this study, we examined whether orthographic learning can be demonstrated in disabled readers learning to read in a transparent orthography (Dutch). In addition, we tested the effect of the use of text-to-speech software, a new form of direct instruction, on orthographic learning. Both research goals were investigated by replicating Share's…

  12. Comparing the Impact of Rates of Text-to-Speech Software on Reading Fluency and Comprehension for Adults with Reading Difficulties

    ERIC Educational Resources Information Center

    Coleman, Mari Beth; Killdare, Laura K.; Bell, Sherry Mee; Carter, Amanda M.

    2014-01-01

    The purpose of this study was to determine the impact of text-to-speech software on reading fluency and comprehension for four postsecondary students with below average reading fluency and comprehension including three students diagnosed with learning disabilities and concomitant conditions (e.g., attention deficit hyperactivity disorder, seizure…

  13. A multilingual audiometer simulator software for training purposes.

    PubMed

    Kompis, Martin; Steffen, Pascal; Caversaccio, Marco; Brugger, Urs; Oesch, Ivo

    2012-04-01

    A set of algorithms that allows a computer to determine the answers of simulated patients during pure tone and speech audiometry is presented. Based on these algorithms, a computer program for training in audiometry was written and found to be useful for teaching purposes. The aim was to develop flexible audiometer simulator software as a teaching and training tool for pure tone and speech audiometry, both with and without masking. First, a set of algorithms that allows a computer to determine the answers of a simulated, hearing-impaired patient was developed. Then the software was implemented. Extensive use was made of simple, editable text files to define all texts in the user interface and all patient definitions. The software, 'audiometer simulator', is available for free download. It can be used to train pure tone audiometry (both with and without masking), speech audiometry, measurement of the uncomfortable level, and simple simulation tests. Owing to the use of text files, the user can alter or add patient definitions and all texts and labels shown on the screen. So far, English, French, German, and Portuguese user interfaces are available, and the user can choose between German and French speech audiometry.

  14. SPICODYN: A Toolbox for the Analysis of Neuronal Network Dynamics and Connectivity from Multi-Site Spike Signal Recordings.

    PubMed

    Pastore, Vito Paolo; Godjoski, Aleksandar; Martinoia, Sergio; Massobrio, Paolo

    2018-01-01

    We implemented automated and efficient open-source software for the analysis of multi-site neuronal spike signals. The software package, named SPICODYN, has been developed as a standalone Windows GUI application, using the C# programming language with Microsoft Visual Studio, based on the .NET Framework 4.5 development environment. Accepted input data formats are HDF5, level 5 MAT and text files, containing recorded or generated time-series spike signal data. SPICODYN processes such electrophysiological signals focusing on spiking and bursting dynamics and functional-effective connectivity analysis. In particular, for inferring network connectivity, a new implementation of the transfer entropy method is presented, dealing with multiple time delays (temporal extension) and with multiple binary patterns (high-order extension). SPICODYN is specifically tailored to process data coming from different Multi-Electrode Array setups, guaranteeing, in those specific cases, automated processing. The optimized implementation of the Delayed Transfer Entropy and High-Order Transfer Entropy algorithms allows accurate and rapid analysis to be performed on multiple spike trains from thousands of electrodes.
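
    A compact sketch of the connectivity measure named above, delayed transfer entropy between binary spike trains, implemented by direct counting. SPICODYN's optimized C# implementation and its high-order (multi-pattern) extension are not reproduced here; the example trains are synthetic.

        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y, delay=1):
            """TE (bits) from binary train x to y, x acting after `delay` bins:
            TE = sum p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
            triples = Counter()
            for t in range(delay - 1, len(y) - 1):
                triples[(y[t + 1], y[t], x[t + 1 - delay])] += 1
            n = sum(triples.values())
            pair_yx, pair_yy, single_y = Counter(), Counter(), Counter()
            for (y1, y0, x0), c in triples.items():
                pair_yx[(y0, x0)] += c
                pair_yy[(y1, y0)] += c
                single_y[y0] += c
            te = 0.0
            for (y1, y0, x0), c in triples.items():
                p_cond_both = c / pair_yx[(y0, x0)]
                p_cond_y = pair_yy[(y1, y0)] / single_y[y0]
                te += (c / n) * np.log2(p_cond_both / p_cond_y)
            return te

        rng = np.random.default_rng(1)
        x = (rng.random(20000) < 0.1).astype(int)                    # source train
        y = np.roll(x, 2) | (rng.random(20000) < 0.02).astype(int)   # follows x by 2 bins
        print(transfer_entropy(x, y, delay=2))  # clearly positive
        print(transfer_entropy(x, y, delay=5))  # near zero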

  15. [Pilot study of domain-specific terminology adaptation for morphological analysis: research on unknown terms in national examination documents of radiological technologists].

    PubMed

    Tsuji, Shintarou; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2008-07-20

    Although large volumes of medical text are stored in electronic format, they are seldom reused because of the difficulty of processing narrative text by computer. Morphological analysis is a key technology for extracting medical terms correctly and automatically; this process parses a sentence into its smallest units, the morphemes. Phrases consisting of two or more technical terms, however, cause morphological analysis software to fail to parse the sentence and to output the unprocessed terms as "unknown words." The purpose of this study was to reduce the number of unknown words in medical narrative text processing. The numbers of unknown words produced when parsing the text of the national examination for radiological technologists with and without additional dictionaries were compared. The ratio of unknown words was reduced from 1.0% to 0.36% by adding the terminology of radiological technology, MeSH terms, and ICD-10 labels. The terminology of radiological technology was the most effective resource, accounting for a 0.62% reduction. This result clearly shows the necessity of careful dictionary selection and of attention to trends in unknown words. The potential of this investigation is to make available a large body of clinical information that would otherwise be inaccessible for applications other than manual review by health care personnel.
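
    The effect of adding a domain dictionary can be illustrated with a deliberately simplified segmenter. Real morphological analysers (e.g., MeCab with user dictionaries) are far more sophisticated; the dictionaries and sentence below are toy English stand-ins for the Japanese examination text.

        def segment(text, dictionary):
            """Greedy longest-match; characters outside the dictionary count
            as unknown tokens."""
            tokens, unknown, i = [], 0, 0
            while i < len(text):
                for j in range(len(text), i, -1):  # try the longest match first
                    if text[i:j] in dictionary:
                        tokens.append(text[i:j]); i = j
                        break
                else:
                    tokens.append(text[i]); unknown += 1; i += 1
            return tokens, unknown / len(tokens)

        base = {"radiation", "dose", "of", "the", " "}
        domain = {"dosimeter", "scintillation"}
        text = "the dose of the dosimeter"
        print(segment(text, base))           # domain term shatters into unknowns
        print(segment(text, base | domain))  # added dictionary removes them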

  16. Onboard shuttle on-line software requirements system: Prototype

    NASA Technical Reports Server (NTRS)

    Kolkhorst, Barbara; Ogletree, Barry

    1989-01-01

    The prototype discussed here was developed as proof of concept for a system that could support high volumes of requirements documents with integrated text and graphics; the solution proposed here could be extended to other projects whose goal is to place paper documents in an electronic system for viewing and printing purposes. The technical problems (such as conversion of documentation between word processors, management of a variety of graphics file formats, and difficulties involved in scanning integrated text and graphics) would be very similar for other systems of this type. Indeed, technological advances in areas such as scanning hardware and software and display terminals ensure that some of the problems encountered here will be solved in the near term (less than five years). Examples of these solvable problems include automated input of integrated text and graphics, errors in the recognition process, and the loss of image information that results from the digitization process. The solution developed for the Online Software Requirements System is modular and allows hardware and software components to be upgraded or replaced as industry solutions mature. The extensive commercial software content allows the NASA customer to apply resources to solving the problem and maintaining documents.

  17. KMWin--a convenient tool for graphical presentation of results from Kaplan-Meier survival time analysis.

    PubMed

    Gross, Arnd; Ziepert, Marita; Scholz, Markus

    2012-01-01

    Analysis of clinical studies often necessitates multiple graphical representations of the results. Many professional software packages are available for this purpose, but most are either only commercially available or hard to use, especially if one aims to generate or customize a large number of similar graphical outputs. We developed a new, freely available software tool called KMWin (Kaplan-Meier for Windows) facilitating Kaplan-Meier survival time analysis. KMWin is based on the statistical software environment R and provides an easy-to-use graphical interface. Survival time data can be supplied as an SPSS (sav), SAS export (xpt) or text file (dat), the last also being a common export format of other applications such as Excel. Figures can be exported directly in any graphical file format supported by R. On the basis of a working example, we demonstrate how to use KMWin and present its main functions. We show how to control the interface, customize the graphical output, and analyse survival time data. A number of comparisons are performed between KMWin and SPSS regarding graphical output, statistical output, data management and development. Although the general functionality of SPSS is larger, KMWin comprises a number of features useful for survival time analysis in clinical trials and other applications, for example the number of cases and the number of cases at risk within the figure, and provision of a queue system for repetitive analyses of updated data sets. Moreover, major adjustments of graphical settings can be performed easily in a single window. We conclude that our tool is well suited and convenient for repetitive analyses of survival time data. It can be used by non-statisticians and provides frequently used functions as well as functions not supplied by standard software packages. The software is routinely applied in several clinical study groups.
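
    The estimator behind KMWin's plots is compact enough to sketch directly. A minimal product-limit implementation (KMWin itself delegates the statistics to R); the (time, event) pairs below are invented.

        def kaplan_meier(times, events):
            """Return [(t, S(t))] at each distinct event time; event=1 is
            death/failure, event=0 is censoring."""
            data = sorted(zip(times, events))
            n_at_risk = len(data)
            s, curve, i = 1.0, [], 0
            while i < len(data):
                t = data[i][0]
                at_t = [e for tt, e in data if tt == t]
                deaths = sum(at_t)
                if deaths:
                    s *= 1 - deaths / n_at_risk  # product-limit step
                    curve.append((t, s))
                n_at_risk -= len(at_t)           # deaths and censored leave risk set
                i += len(at_t)
            return curve

        times  = [5, 8, 8, 12, 15, 20, 21, 30]
        events = [1, 1, 0, 1,  0,  1,  0,  1]    # 0 = censored
        for t, s in kaplan_meier(times, events):
            print(f"t={t:>2}  S(t)={s:.3f}")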

  18. KMWin – A Convenient Tool for Graphical Presentation of Results from Kaplan-Meier Survival Time Analysis

    PubMed Central

    Gross, Arnd; Ziepert, Marita; Scholz, Markus

    2012-01-01

    Background Analysis of clinical studies often necessitates multiple graphical representations of the results. Many professional software packages are available for this purpose, but most are either only commercially available or hard to use, especially if one aims to generate or customize a large number of similar graphical outputs. We developed a new, freely available software tool called KMWin (Kaplan-Meier for Windows) facilitating Kaplan-Meier survival time analysis. KMWin is based on the statistical software environment R and provides an easy-to-use graphical interface. Survival time data can be supplied as an SPSS (sav), SAS export (xpt) or text file (dat), the last also being a common export format of other applications such as Excel. Figures can be exported directly in any graphical file format supported by R. Results On the basis of a working example, we demonstrate how to use KMWin and present its main functions. We show how to control the interface, customize the graphical output, and analyse survival time data. A number of comparisons are performed between KMWin and SPSS regarding graphical output, statistical output, data management and development. Although the general functionality of SPSS is larger, KMWin comprises a number of features useful for survival time analysis in clinical trials and other applications, for example the number of cases and the number of cases at risk within the figure, and provision of a queue system for repetitive analyses of updated data sets. Moreover, major adjustments of graphical settings can be performed easily in a single window. Conclusions We conclude that our tool is well suited and convenient for repetitive analyses of survival time data. It can be used by non-statisticians and provides frequently used functions as well as functions not supplied by standard software packages. The software is routinely applied in several clinical study groups. PMID:22723912

  19. Ten years of science news: A longitudinal analysis of scientific culture in the Spanish digital press.

    PubMed

    Groves, Tamar; Figuerola, Carlos G; Quintanilla, Miguel Á

    2016-08-01

    This article presents our study of science coverage in the digital Spanish press over the last decade. We employed automated information retrieval procedures to create a corpus of 50,763 text units dealing with science and technology, and used automated text-analysis procedures to provide a general picture of the structure, characteristics and evolution of science news in Spain. We found science coverage of between 6% and 7%, with a clearly high proportion of biomedicine and a predominance of science over technology, although we also detected an increase in technological content during the second half of the decade. Analysing the extrinsic and intrinsic features of science culture, we found a predominance of intrinsic features that still needs further analysis. Our attempt to use specialised software to examine big data was effective, and allowed us to reach these preliminary conclusions. © The Author(s) 2015.

  20. VA's Integrated Imaging System on three platforms.

    PubMed

    Dayhoff, R E; Maloney, D L; Majurski, W J

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.

  1. VA's Integrated Imaging System on three platforms.

    PubMed Central

    Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983

  2. Seven Steps to Responsible Software Selection. ERIC Digest.

    ERIC Educational Resources Information Center

    Komoski, P. Kenneth; Plotnick, Eric

    Microcomputers in schools contribute significantly to the learning process, and software selection should be taken as seriously as the selection of textbooks. The seven steps of responsible software selection are: (1) analyzing needs, including the differentiation between needs and objectives; (2) specification of requirements; (3) identifying…

  3. Orthographic learning and the role of text-to-speech software in Dutch disabled readers.

    PubMed

    Staels, Eva; Van den Broeck, Wim

    2015-01-01

    In this study, we examined whether orthographic learning can be demonstrated in disabled readers learning to read in a transparent orthography (Dutch). In addition, we tested the effect of the use of text-to-speech software, a new form of direct instruction, on orthographic learning. Both research goals were investigated by replicating Share's self-teaching paradigm. A total of 65 disabled Dutch readers were asked to read eight stories containing embedded homophonic pseudoword targets (e.g., Blot/Blod), with or without the support of text-to-speech software. The amount of orthographic learning was assessed 3 or 7 days later by three measures of orthographic learning. First, the results supported the presence of orthographic learning during independent silent reading by demonstrating that target spellings were correctly identified more often, named more quickly, and spelled more accurately than their homophone foils. Our results support the hypothesis that all readers, even poor readers of transparent orthographies, are capable of developing word-specific knowledge. Second, a negative effect of text-to-speech software on orthographic learning was demonstrated in this study. This negative effect was interpreted as the consequence of passively listening to the auditory presentation of the text. We clarify how these results can be interpreted within current theoretical accounts of orthographic learning and briefly discuss implications for remedial interventions. © Hammill Institute on Disabilities 2013.

  4. Preparing Colorful Astronomical Images II

    NASA Astrophysics Data System (ADS)

    Levay, Z. G.; Frattare, L. M.

    2002-12-01

    We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.

  5. How to Lessen the Effects of User Resistance on the Adoption of an E-Learning Environment: Screenshot Annotation on Flickr

    ERIC Educational Resources Information Center

    Huang, T. K.

    2018-01-01

    The study makes use of the photo-hosting site, namely Flickr, for students to upload screenshots to demonstrate computer software problems and troubleshooting software. By creating non-text stickers and text-based annotations above the screenshots, students are able to help one another to diagnose and solve problems with greater certainty. In…

  6. Emerging Realities of Text-to-Speech Software for Nonnative-English-Speaking Community College Students in the Freshman Year

    ERIC Educational Resources Information Center

    Baker, Fiona S.

    2015-01-01

    This study explores the expectations and early and subsequent realities of text-to-speech software for 24 nonnative-English-speaking college students who were experiencing reading difficulties in their freshman year of college. The study took place over two semesters in one academic year (from September to June) at a community college on the…

  7. 10 CFR 961.11 - Text of the contract.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... requires permanent isolation. 13. The term electricity (kilowatt hours) generated and sold means gross...-type documents or computer software (including computer programs, computer software data bases, and...

  8. "Software Tools" to Improve Student Writing.

    ERIC Educational Resources Information Center

    Oates, Rita Haugh

    1987-01-01

    Reviews several software packages that analyze text readability, check for spelling and style problems, offer desktop publishing capabilities, teach interviewing skills, and teach grammar using a computer game. (SRT)

  9. At the Creation: Chaos, Control, and Automation--Commercial Software Development for Archives.

    ERIC Educational Resources Information Center

    Dürr, W. Theodore

    1988-01-01

    An approach to the design of flexible text-based management systems for archives includes tiers for repository, software, and user management systems. Each tier has four layers--objective, program, result, and interface. Traps awaiting software development companies involve the market, competition, operations, and finance. (10 references) (MES)

  10. Records Inventory Data Collection Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Brian A.

    1995-03-01

    DATALINK was created to provide an easy-to-use data collection program for records management software products. It provides several useful tools for capturing and validating record index data in the field. It also allows users to easily create a comma-delimited ASCII text file for data export into most records management software products.
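
    The export step described above amounts to writing delimited ASCII. A minimal sketch with invented field names and records:

        import csv

        records = [
            {"box": "1995-014", "title": "Budget working papers", "year": "1993"},
            {"box": "1995-015", "title": "Contract files, A-L",   "year": "1994"},
        ]

        # comma-delimited ASCII output suitable for import into a records
        # management package
        with open("inventory_export.txt", "w", newline="", encoding="ascii") as f:
            writer = csv.DictWriter(f, fieldnames=["box", "title", "year"])
            writer.writeheader()
            writer.writerows(records)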

  11. Text Processing and Formatting: Composure, Composition and Eros.

    ERIC Educational Resources Information Center

    Blair, John C., Jr.

    1984-01-01

    Review of computer software offering text editing/processing capabilities highlights work habits, elements of computer style and composition, buffers, the CRT, line- and screen-oriented text editors, video attributes, "swapping,""cache" memory, "disk emulators," text editing versus text processing, and UNIX operating…

  12. Chapter 16: text mining for translational bioinformatics.

    PubMed

    Cohen, K Bretonnel; Hunter, Lawrence E

    2013-04-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
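
    The two basic approaches named above can be contrasted in a few lines. The pattern, sentences and labels below are invented toys, with scikit-learn standing in for the statistical route:

        import re
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        # Rule-based (knowledge-based): a hand-written pattern for one
        # narrow entity type -- here a crude gene-symbol rule.
        gene_pattern = re.compile(r"\b[A-Z][A-Z0-9]{1,5}\b")
        print(gene_pattern.findall("BRCA1 and TP53 variants were sequenced."))

        # Machine-learning-based (statistical): learn relevance from labels.
        sentences = ["mutation of BRCA1 increases risk",
                     "patient denied chest pain",
                     "TP53 expression was elevated",
                     "follow-up visit scheduled"]
        labels = [1, 0, 1, 0]  # 1 = genomics-relevant (toy labels)
        vec = CountVectorizer().fit(sentences)
        clf = MultinomialNB().fit(vec.transform(sentences), labels)
        print(clf.predict(vec.transform(["BRCA2 mutation detected"])))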

  13. CytometryML binary data standards

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.

    2005-03-01

    CytometryML is a proposed new Analytical Cytology (Cytomics) data standard, based on a common set of XML schemas for encoding flow cytometry and digital microscopy text-based data types (metadata). CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. Flow Cytometry Standard (FCS) list mode has been mapped to the DICOM Waveform Information Object. The separation of the large binary data objects (list-mode and image data) from the XML description of the metadata permits the metadata to be directly displayed, analyzed, and reported with standard commercial software packages; the direct use of XML languages; and direct interfacing with clinical information systems. The separation of the binary data into its own files simplifies parsing because all extraneous header data has been eliminated. The storage of images as two-dimensional arrays without any extraneous data, such as in the Adobe Photoshop RAW format, facilitates the development by scientists of their own analysis and visualization software. Adobe Photoshop provided the display infrastructure and the translation facility to interconvert between image data in commercial formats and the RAW format. Similarly, the storage and parsing of a list-mode binary data type with a group of parameters specified at compilation time is straightforward. However, when the user is permitted at run time to select a subset of the parameters and/or specify the results of mathematical manipulations, the development of special software was required. The use of CytometryML will permit investigators to create their own interoperable data analysis software and to employ commercially available software to disseminate their data.
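
    The point about header-free two-dimensional arrays is easy to demonstrate: as long as shape, dtype and byte order travel in the separate metadata (in CytometryML, the XML), reading the binary object back requires only a few lines of user-written code. The values below are illustrative.

        import numpy as np

        # these would come from the XML metadata, not the binary file itself
        height, width, dtype = 512, 512, np.uint16

        image = (np.random.default_rng(0)
                 .integers(0, 4096, (height, width)).astype(dtype))
        image.tofile("frame0.raw")  # raw pixels only, no header

        recovered = np.fromfile("frame0.raw", dtype=dtype).reshape(height, width)
        assert (recovered == image).all()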

  14. Data-Proximate Analysis and Visualization in the Cloud using Cloudstream, an Open-Source Application Streaming Technology Stack

    NASA Astrophysics Data System (ADS)

    Fisher, W. I.

    2017-12-01

    The rise in cloud computing, coupled with the growth of "Big Data", has led to a migration away from local scientific data storage. The increasing size of remote scientific data sets, however, makes it difficult for scientists to subject them to large-scale analysis and visualization. These large datasets can take an inordinate amount of time to download; subsetting is a potential solution, but subsetting services are not yet ubiquitous. Data providers may also pay steep prices, as many cloud providers meter data based on how much data leaves their cloud service. The solution to this problem is a deceptively simple one: move data analysis and visualization tools to the cloud, so that scientists may perform data-proximate analysis and visualization. This results in increased transfer speeds, while egress costs are lowered or completely eliminated. Moving standard desktop analysis and visualization tools to the cloud is enabled via a technique called "Application Streaming". This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations. When coupled with containerization technology such as Docker, we are able to easily deploy legacy analysis and visualization software to the cloud whilst retaining access via a desktop, a netbook, a smartphone, or the next generation of hardware, whatever it may be. Unidata has created a Docker-based solution for easily adapting legacy software for Application Streaming. This technology stack, dubbed Cloudstream, allows desktop software to run in the cloud with little-to-no effort. The Docker container is configured by editing text files, and the legacy software does not need to be modified in any way. This work will discuss the underlying technologies used by Cloudstream and outline how to use Cloudstream to run and access an existing desktop application in the cloud.

  15. The Study and Implementation of Text-to-Speech System for Agricultural Information

    NASA Astrophysics Data System (ADS)

    Zheng, Huoguo; Hu, Haiyan; Liu, Shihong; Meng, Hong

    Broadcast and television coverage has increased to more than 98% in China. Information services delivered by radio offer wide coverage and low cost, and are readily accepted by grass-roots farmers. To make broadcast information services more effective, and to address the scarcity of information resources in rural areas, we researched and developed a text-to-speech system. The system includes two parts, a software subsystem and a hardware device, both of which can translate text into audio files. The software subsystem was implemented on top of third-party middleware, and the hardware subsystem was realized with microelectronics technology. Results indicate that the hardware subsystem performs better than the software subsystem. The system has been applied in Huailai, Hebei Province, where it has produced more than 8000 audio files as programming materials for the local radio station.
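
    For illustration only, the software subsystem's task (text in, audio file out) can be sketched in a few lines of Python; the paper's middleware is not named, so the pyttsx3 library and the sample text here are assumptions.

        import pyttsx3

        engine = pyttsx3.init()
        # Convert a hypothetical broadcast item to an audio file for the radio station.
        engine.save_to_file("Frost warning tonight: cover seedlings before dusk.",
                            "broadcast_item.wav")
        engine.runAndWait()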

  16. Interactive text mining with Pipeline Pilot: a bibliographic web-based tool for PubMed.

    PubMed

    Vellay, S G P; Latimer, N E Miller; Paillard, G

    2009-06-01

    Text mining has become an integral part of all research in the medical field. Many text analysis software platforms support particular use cases and only those. We show an example of a bibliographic tool that can be used to support virtually any use case in an agile manner. Here we focus on a Pipeline Pilot web-based application that interactively analyzes and reports on PubMed search results. This will be of interest to any scientist to help identify the most relevant papers in a topical area more quickly and to evaluate the results of query refinement. Links with Entrez databases help both the biologist and the chemist alike. We illustrate this application with Leishmaniasis, a neglected tropical disease, as a case study.
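
    The kind of programmatic PubMed querying such a tool wraps can be sketched with Biopython's E-utilities client; the search term follows the paper's Leishmaniasis case study, while the email address and retmax value are placeholders.

        from Bio import Entrez

        Entrez.email = "you@example.org"  # NCBI asks clients to identify themselves
        handle = Entrez.esearch(db="pubmed",
                                term="leishmaniasis AND drug therapy",
                                retmax=20)
        record = Entrez.read(handle)
        print(record["Count"], record["IdList"])  # hit count and PubMed IDs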

  17. CrossTalk: The Journal of Defense Software Engineering. Volume 23, Number 2, March/April 2010

    DTIC Science & Technology

    2010-04-01

    SDLC phase. 4. Developing secure software depends on understanding the operational context in which it will be used. This context includes... its development. BSI leverages the Common Weakness Enumeration (CWE) and the Common Attack Pattern Enumeration and Classification (CAPEC) efforts. To... system integrators providing systems (both IT and warfighting) to the Concept Refinement, Technology Development, System Development and

  18. The NASA Astrophysics Data System joins the Revolution

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Kurtz, Michael J.; Henneken, Edwin; Grant, Carolyn S.; Thompson, Donna M.; Chyla, Roman; Holachek, Alexandra; Sudilovsky, Vladimir; Elliott, Jonathan; Murray, Stephen S.

    2015-08-01

    Whether or not scholarly publications are going through an evolution or revolution, one comforting certainty remains: the NASA Astrophysics Data System (ADS) is here to help the working astronomer and librarian navigate through the increasingly complex communication environment we find ourselves in. Born as a bibliographic database, today's ADS is best described as an "aggregator" of scholarly resources relevant to the needs of researchers in astronomy and physics. In addition to indexing content from a variety of publishers, data and software archives, the ADS enriches its records by text-mining and indexing the full-text articles, enriching its metadata through the extraction of citations and acknowledgments, and ingesting bibliographies and data links maintained by astronomy institutions and data archives. In addition, ADS generates and maintains citation and co-readership networks to support discovery and bibliometric analysis. In this talk I will summarize new and ongoing curation activities and technology developments of the ADS in the face of the ever-changing world of scholarly publishing and the trends in information-sharing behavior of astronomers. Recent curation efforts include the indexing of non-standard scholarly content (such as software packages, IVOA documents and standards, and NASA award proposals); the indexing of additional content (full-text of articles, acknowledgments, affiliations, ORCID ids); and enhanced support for bibliographic groups and data links. Recent technology developments include a new Application Programming Interface which provides access to a variety of ADS microservices, a new user interface featuring a variety of visualizations and bibliometric analyses, and integration with ORCID services to support paper claiming.

  19. Exploratory analysis of textual data from the Mother and Child Handbook using a text mining method (II): Monthly changes in the words recorded by mothers.

    PubMed

    Tagawa, Miki; Matsuda, Yoshio; Manaka, Tomoko; Kobayashi, Makiko; Ohwada, Michitaka; Matsubara, Shigeki

    2017-01-01

    The aim of the study was to examine the possibility of converting subjective textual data written in the free column space of the Mother and Child Handbook (MCH) into objective information using text mining and to compare any monthly changes in the words written by the mothers. Pregnant women without complications (n = 60) were divided into two groups according to State-Trait Anxiety Inventory grade: low trait anxiety (group I, n = 39) and high trait anxiety (group II, n = 21). Exploratory analysis of the textual data from the MCH was conducted by text mining using the Word Miner software program. Using 1203 structural elements extracted after processing, a comparison of monthly changes in the words used in the mothers' comments was made between the two groups. The data was mainly analyzed by a correspondence analysis. The structural elements in groups I and II were divided into seven and six clusters, respectively, by cluster analysis. Correspondence analysis revealed clear monthly changes in the words used in the mothers' comments as the pregnancy progressed in group I, whereas the association was not clear in group II. The text mining method was useful for exploratory analysis of the textual data obtained from pregnant women, and the monthly change in the words used in the mothers' comments as pregnancy progressed differed according to their degree of unease. © 2016 Japan Society of Obstetrics and Gynecology.
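
    Correspondence analysis itself is a small computation: the singular value decomposition of the standardized residuals of a contingency table. The numpy sketch below uses an invented words-by-month table, not the study's data.

        import numpy as np

        # Hypothetical contingency table: rows are words, columns are months.
        N = np.array([[12, 5, 3],
                      [4, 9, 8],
                      [2, 6, 14]], dtype=float)

        P = N / N.sum()                 # correspondence matrix
        r = P.sum(axis=1)               # row masses
        c = P.sum(axis=0)               # column masses
        S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
        U, sv, Vt = np.linalg.svd(S, full_matrices=False)
        row_coords = (U * sv) / np.sqrt(r)[:, None]     # word coordinates
        col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]  # month coordinates
        print(row_coords[:, :2])        # first two CA dimensions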

  20. ProtocolNavigator: emulation-based software for the design, documentation and reproduction of biological experiments.

    PubMed

    Khan, Imtiaz A; Fraser, Adam; Bray, Mark-Anthony; Smith, Paul J; White, Nick S; Carpenter, Anne E; Errington, Rachel J

    2014-12-01

    Experimental reproducibility is fundamental to the progress of science. Irreproducible research decreases the efficiency of basic biological research and drug discovery and impedes experimental data reuse. A major contributing factor to irreproducibility is the difficulty of interpreting complex experimental methodologies and designs from written text and of assessing variations among different experiments. Current bioinformatics initiatives are focused either on computational research reproducibility (i.e. data analysis) or on laboratory information management systems. Here, we present a software tool, ProtocolNavigator, which addresses the largely overlooked challenges of interpretation and assessment. It provides a biologist-friendly, open-source, emulation-based tool for designing, documenting and reproducing biological experiments. ProtocolNavigator was implemented in Python 2.7, using the wx module to build the graphical user interface. It is platform-independent software, freely available from http://protocolnavigator.org/index.html under the GPL v2 license. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. DATALINK. Records Inventory Data Collection Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, B.

    1995-03-01

    DATALINK was created to provide an easy-to-use data collection program for records management software products. It provides several useful tools for capturing and validating record index data in the field. It also allows users to easily create a comma-delimited ASCII text file for data export into most records management software products.
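
    The export format described is the kind produced by a few lines of Python; the field names and records below are invented for illustration.

        import csv

        records = [
            {"box": "A-101", "title": "Quarterly reports", "date": "1994-07-01"},
            {"box": "A-102", "title": "Contract files", "date": "1994-09-15"},
        ]

        # A comma-delimited ASCII file suitable for import into most
        # records management software products.
        with open("inventory.csv", "w", newline="", encoding="ascii") as f:
            writer = csv.DictWriter(f, fieldnames=["box", "title", "date"])
            writer.writeheader()
            writer.writerows(records)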

  2. Exploratory analysis of textual data from the Mother and Child Handbook using the text-mining method: Relationships with maternal traits and post-partum depression.

    PubMed

    Matsuda, Yoshio; Manaka, Tomoko; Kobayashi, Makiko; Sato, Shuhei; Ohwada, Michitaka

    2016-06-01

    The aim of the present study was to examine the possibility of screening apprehensive pregnant women and mothers at risk for post-partum depression through an analysis of the textual data in the Mother and Child Handbook using the text-mining method. Uncomplicated pregnant women (n = 58) were divided into two groups according to State-Trait Anxiety Inventory grade (high trait [group I, n = 21] and low trait [group II, n = 37]) or Edinburgh Postnatal Depression Scale score (high score [group III, n = 15] and low score [group IV, n = 43]). An exploratory analysis of the textual data from the Maternal and Child Handbook was conducted using the text-mining method with the Word Miner software program. A comparison of the 'structure elements' was made between the two groups. The number of structure elements extracted as separated words from the text data was 20,004, and the number of structure elements with a threshold of 2 or more as an initial value was 1168. Fifteen key words related to maternal anxiety and six key words related to post-partum depression were extracted. The text-mining method is useful for the exploratory analysis of textual data obtained from pregnant women, and this screening method shows promise for identifying apprehensive pregnant women and mothers at risk for post-partum depression. © 2016 Japan Society of Obstetrics and Gynecology.

  3. A general concept for consistent documentation of computational analyses

    PubMed Central

    Müller, Fabian; Nordström, Karl; Lengauer, Thomas; Schulz, Marcel H.

    2015-01-01

    The ever-growing amount of data in the field of life sciences demands standardized ways of high-throughput computational analysis. This standardization requires thorough documentation of each step in the computational analysis to enable researchers to understand and reproduce the results. However, due to the heterogeneity in software setups and the high rate of change during tool development, reproducibility is hard to achieve. One reason is that there is no common agreement in the research community on how to document computational studies. In many cases, simple flat files or other unstructured text documents are provided by researchers as documentation, which often lack software dependencies, versions and sufficient documentation to understand the workflow and parameter settings. As a solution we suggest a simple and modest approach for documenting and verifying computational analysis pipelines. We propose a two-part scheme that defines a computational analysis using a Process and an Analysis metadata document, which jointly describe all necessary details to reproduce the results. In this design we separate the metadata specifying the process from the metadata describing an actual analysis run, thereby reducing the effort of manual documentation to an absolute minimum. Our approach is independent of a specific software environment, results in human-readable XML documents that can easily be shared with other researchers and allows automated validation to ensure consistency of the metadata. Because our approach has been designed with few or no assumptions concerning the workflow of an analysis, we expect it to be applicable in a wide range of computational research fields. Database URL: http://deep.mpi-inf.mpg.de/DAC/cmds/pub/pyvalid.zip PMID:26055099

  4. Orbiter subsystem hardware/software interaction analysis. Volume 8: AFT reaction control system, part 2

    NASA Technical Reports Server (NTRS)

    Becker, D. D.

    1980-01-01

    The orbiter subsystems and interfacing program elements which interact with the orbiter computer flight software are analyzed. The failure modes identified in the subsystem/element failure mode and effects analysis are examined. Potential interaction with the software is examined through an evaluation of the software requirements. The analysis is restricted to flight software requirements and excludes utility/checkout software. The results of the hardware/software interaction analysis for the forward reaction control system are presented.

  5. Table-driven configuration and formatting of telemetry data in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Manning, Evan

    1994-01-01

    With a restructured software architecture for telemetry system control and data processing, the NASA/Deep Space Network (DSN) has substantially improved its ability to accommodate a wide variety of spacecraft in an era of 'better, faster, cheaper'. In the new architecture, the permanent software implements all capabilities needed by any system user, and text tables specify how these capabilities are to be used for each spacecraft. Most changes can now be made rapidly, outside of the traditional software development cycle. The system can be updated to support a new spacecraft through table changes rather than software changes, reducing the implementation, test, and delivery cycle for such a change from three months to three weeks. The mechanical separation of the text table files from the program software, with tables only loaded into memory when that mission is being supported, dramatically reduces the level of regression testing required. The format of each table is a different compromise between ease of human interpretation, efficiency of computer interpretation, and flexibility.
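
    The table-driven principle is simple to demonstrate: behavior comes from a text table loaded at run time, not from code changes. The Python sketch below uses an invented table format and field names, not the DSN's actual tables.

        from dataclasses import dataclass

        TABLE = """\
        # channel  name         scale  units
        0          voltage      0.02   V
        1          temperature  0.5    degC
        """

        @dataclass
        class ChannelSpec:
            name: str
            scale: float
            units: str

        def load_table(text):
            specs = {}
            for line in text.splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                chan, name, scale, units = line.split()
                specs[int(chan)] = ChannelSpec(name, float(scale), units)
            return specs

        def decode(frame, specs):
            # frame maps channel number -> raw count; conversion is table-driven.
            return {specs[ch].name: raw * specs[ch].scale for ch, raw in frame.items()}

        # Supporting a new spacecraft means editing TABLE, not the program.
        print(decode({0: 512, 1: 47}, load_table(TABLE)))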

  6. Facilitating text reading in posterior cortical atrophy.

    PubMed

    Yong, Keir X X; Rajdev, Kishan; Shakespeare, Timothy J; Leff, Alexander P; Crutch, Sebastian J

    2015-07-28

    We report (1) the quantitative investigation of text reading in posterior cortical atrophy (PCA), and (2) the effects of 2 novel software-based reading aids that result in dramatic improvements in the reading ability of patients with PCA. Reading performance, eye movements, and fixations were assessed in patients with PCA and typical Alzheimer disease and in healthy controls (experiment 1). Two reading aids (single- and double-word) were evaluated based on the notion that reducing the spatial and oculomotor demands of text reading might support reading in PCA (experiment 2). Mean reading accuracy in patients with PCA was significantly worse (57%) compared with both patients with typical Alzheimer disease (98%) and healthy controls (99%); spatial aspects of passages were the primary determinants of text reading ability in PCA. Both aids led to considerable gains in reading accuracy (PCA mean reading accuracy: single-word reading aid = 96%; individual patient improvement range: 6%-270%) and self-rated measures of reading. Data suggest a greater efficiency of fixations and eye movements under the single-word reading aid in patients with PCA. These findings demonstrate how neurologic characterization of a neurodegenerative syndrome (PCA) and detailed cognitive analysis of an important everyday skill (reading) can combine to yield aids capable of supporting important everyday functional abilities. This study provides Class III evidence that for patients with PCA, 2 software-based reading aids (single-word and double-word) improve reading accuracy. © 2015 American Academy of Neurology.

  7. Facilitating text reading in posterior cortical atrophy

    PubMed Central

    Rajdev, Kishan; Shakespeare, Timothy J.; Leff, Alexander P.; Crutch, Sebastian J.

    2015-01-01

    Objective: We report (1) the quantitative investigation of text reading in posterior cortical atrophy (PCA), and (2) the effects of 2 novel software-based reading aids that result in dramatic improvements in the reading ability of patients with PCA. Methods: Reading performance, eye movements, and fixations were assessed in patients with PCA and typical Alzheimer disease and in healthy controls (experiment 1). Two reading aids (single- and double-word) were evaluated based on the notion that reducing the spatial and oculomotor demands of text reading might support reading in PCA (experiment 2). Results: Mean reading accuracy in patients with PCA was significantly worse (57%) compared with both patients with typical Alzheimer disease (98%) and healthy controls (99%); spatial aspects of passages were the primary determinants of text reading ability in PCA. Both aids led to considerable gains in reading accuracy (PCA mean reading accuracy: single-word reading aid = 96%; individual patient improvement range: 6%–270%) and self-rated measures of reading. Data suggest a greater efficiency of fixations and eye movements under the single-word reading aid in patients with PCA. Conclusions: These findings demonstrate how neurologic characterization of a neurodegenerative syndrome (PCA) and detailed cognitive analysis of an important everyday skill (reading) can combine to yield aids capable of supporting important everyday functional abilities. Classification of evidence: This study provides Class III evidence that for patients with PCA, 2 software-based reading aids (single-word and double-word) improve reading accuracy. PMID:26138948

  8. Degrees of systematic thoroughness: A text analysis of student technical science writing

    NASA Astrophysics Data System (ADS)

    Esch, Catherine Julia

    This dissertation investigates student technical science writing and use of evidence. Student writers attended a writing-intensive undergraduate university oceanography course where they were required to write a technical paper drawing from an instructor-designed software program, Our Dynamic Planet. This software includes multiple interactive geological data sets relevant to plate tectonics. Through qualitative text analysis of students' science writing, two research questions frame the study: How are the papers textually structured? Are there distinctions between high- and low-rated papers? General and specific text characteristics within three critical sections of the technical paper are identified and analyzed (Observations, Interpretations, Conclusions). Specific text characteristics consist of typical types of figures displayed in the papers and typical statements within each paper section. Data gathering consisted of collecting 15 student papers, which constitute the population of study. An analytical method was designed to manage and analyze the text characteristics. It has three stages: identifying coding categories, re-formulating the categories, and configuring categories. Three important elements emerged that identified notable distinctions in paper quality: data display and use, narration of complex geological feature relationships, and overall organization of text structure. An inter-rater coding concordance check was conducted and showed high concordance ratios for the coding of each section: Observations = 0.95; Interpretations = 0.93; and Conclusions = 0.87. These categories collectively reveal a larger pattern of general differences across the paper quality levels (high, medium, low). This variation in the quality of papers demonstrates degrees of systematic thoroughness, defined as how systematically each student engages in the tasks of the assignment, and how thoroughly and consistently the student follows through on that systematic commitment. Characterizations of each paper level indicate areas that can be explored to develop an explicit instructional pedagogy to support a greater number of low- and medium-level students. The implications suggest that most students require greater explicit instruction: having students pay more attention to detail and demonstrate greater follow-through in order to produce a solid scientific argument in their technical papers.

  9. Computer-aided evaluation of low-dose and low-contrast agent third-generation dual-source CT angiography prior to transcatheter aortic valve implantation (TAVI).

    PubMed

    Dankerl, Peter; Hammon, Matthias; Seuss, Hannes; Tröbs, Monique; Schuhbaeck, Annika; Hell, Michaela M; Cavallaro, Alexander; Achenbach, Stephan; Uder, Michael; Marwan, Mohamed

    2017-05-01

    To evaluate the performance of computer-aided evaluation software for a comprehensive workup of patients prior to transcatheter aortic valve implantation (TAVI) using low-contrast agent and low radiation dose third-generation dual-source CT angiography. We evaluated 30 consecutive patients scheduled for TAVI. All patients underwent ECG-triggered high-pitch dual-source CT angiography of the aortic root and aorta with a standardized contrast agent volume (30 ml Imeron350, flow rate 4 ml/s) and low-dose (100 kV/350 mAs) protocol. An expert (10 years of experience) manually evaluated aortic root and iliac access dimensions (distance between coronary ostia and aortic annulus, minimal/maximal diameters and area-derived diameter of the aortic annulus) and the best CT-predicted fluoroscopic projection angle as the reference standard. Utilizing computer-aided software (syngo.via), the same pre-TAVI workup was performed and compared to the reference standard. Mean CTDIvol was 3.46 mGy and mean DLP 217.6 ± 12.1 mGy cm, corresponding to a mean effective dose of 3.7 ± 0.2 mSv. Computer-aided evaluation was successful in all but one patient. Compared to the reference standard, Bland-Altman analysis indicated very good agreement for the distances between aortic annulus and coronary ostia (RCA: mean difference 0.8 mm; 95 % CI 0.4-1.2 mm; LM: mean difference 0.9 mm; 95 % CI 0.5-1.3 mm); however, we demonstrated a systematic overestimation of the annulus area derived using the software (mean difference 44.4 mm²; 95 % CI 30.4-58.3 mm²). Based on respective annulus dimensions, the recommended prosthesis size (Edwards SAPIEN 3) matched in 26 out of the 29 patients (90 %). CT-derived fluoroscopic projection angles showed excellent agreement for both methods. Out of 58 iliac arteries, 15 (25 %) arteries could not be segmented by the software. Preprocessing time of the software was 71 ± 11 s (range 51-96 s), and reading time with the software was 118 ± 31 s (range 68-201 s). In the workup of pre-TAVI CT angiography, computer-aided evaluation of low-contrast, low-dose examinations is feasible with good agreement and quick reading time. However, a systematic overestimation of the aortic annulus area is observed.

  10. Design Considerations in Development of a Mobile Health Intervention Program: The TEXT ME and TEXTMEDS Experience

    PubMed Central

    Thakkar, Jay; Barry, Tony; Thiagalingam, Aravinda; Redfern, Julie; McEwan, Alistair L; Rodgers, Anthony

    2016-01-01

    Background Mobile health (mHealth) has huge potential to deliver preventative health services. However, there is a paucity of literature on the theoretical constructs and the technical, practical, and regulatory considerations that enable delivery of such services. Objectives The objective of this study was to outline the key considerations in the development of a text message-based mHealth program, thus providing broad recommendations and guidance to future researchers designing similar programs. Methods We describe the key considerations in designing the intervention with respect to functionality, technical infrastructure, data management, software components, regulatory requirements, and operationalization. We also illustrate some of the potential issues and decision points utilizing our experience of developing text message (short message service, SMS) management systems to support 2 large randomized controlled trials: TEXT messages to improve MEDication adherence & Secondary prevention (TEXTMEDS) and Tobacco, EXercise and dieT MEssages (TEXT ME). Results The steps identified in the development process were: (1) background research and development of the text message bank based on scientific evidence and disease-specific guidelines, (2) pilot testing with the target audience and incorporating feedback, (3) software-hardware customization to enable delivery of complex personalized programs using prespecified algorithms, and (4) legal and regulatory considerations. Additional considerations in developing text message management systems include: balancing the use of customized versus preexisting software systems, the level of automation versus need for human inputs, monitoring, ensuring data security, interface flexibility, and the ability for upscaling. Conclusions A merging of expertise in clinical and behavioral sciences, health and research data management systems, software engineering, and mobile phone regulatory requirements is essential to develop a platform to deliver and manage support programs for hundreds of participants simultaneously as in the TEXT ME and TEXTMEDS trials. This research provides broad principles that may assist other researchers in developing mHealth programs. PMID:27847350

  11. Design Considerations in Development of a Mobile Health Intervention Program: The TEXT ME and TEXTMEDS Experience.

    PubMed

    Thakkar, Jay; Barry, Tony; Thiagalingam, Aravinda; Redfern, Julie; McEwan, Alistair L; Rodgers, Anthony; Chow, Clara K

    2016-11-15

    Mobile health (mHealth) has huge potential to deliver preventative health services. However, there is a paucity of literature on the theoretical constructs and the technical, practical, and regulatory considerations that enable delivery of such services. The objective of this study was to outline the key considerations in the development of a text message-based mHealth program, thus providing broad recommendations and guidance to future researchers designing similar programs. We describe the key considerations in designing the intervention with respect to functionality, technical infrastructure, data management, software components, regulatory requirements, and operationalization. We also illustrate some of the potential issues and decision points utilizing our experience of developing text message (short message service, SMS) management systems to support 2 large randomized controlled trials: TEXT messages to improve MEDication adherence & Secondary prevention (TEXTMEDS) and Tobacco, EXercise and dieT MEssages (TEXT ME). The steps identified in the development process were: (1) background research and development of the text message bank based on scientific evidence and disease-specific guidelines, (2) pilot testing with the target audience and incorporating feedback, (3) software-hardware customization to enable delivery of complex personalized programs using prespecified algorithms, and (4) legal and regulatory considerations. Additional considerations in developing text message management systems include: balancing the use of customized versus preexisting software systems, the level of automation versus need for human inputs, monitoring, ensuring data security, interface flexibility, and the ability for upscaling. A merging of expertise in clinical and behavioral sciences, health and research data management systems, software engineering, and mobile phone regulatory requirements is essential to develop a platform to deliver and manage support programs for hundreds of participants simultaneously as in the TEXT ME and TEXTMEDS trials. This research provides broad principles that may assist other researchers in developing mHealth programs. ©Jay Thakkar, Tony Barry, Aravinda Thiagalingam, Julie Redfern, Alistair L McEwan, Anthony Rodgers, Clara K Chow. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 15.11.2016.
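
    One of the design points above, a prespecified algorithm selecting personalized messages from a message bank, can be sketched as follows; the bank, patient fields, and selection rule are invented for illustration, not those of TEXT ME or TEXTMEDS.

        import random

        MESSAGE_BANK = {
            "smoker": ["Craving a cigarette? Try a five-minute walk instead."],
            "diabetic": ["Checking your blood sugar regularly helps your heart too."],
            "general": ["Small daily choices add up to a healthier heart."],
        }

        def next_message(patient, sent_log):
            # Prefer condition-specific messages the patient has not yet received.
            tags = [t for t in ("smoker", "diabetic") if patient.get(t)] + ["general"]
            candidates = [m for t in tags for m in MESSAGE_BANK[t] if m not in sent_log]
            return random.choice(candidates) if candidates else None

        patient = {"id": 42, "smoker": True, "diabetic": False}
        print(next_message(patient, sent_log=set()))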

  12. JAva GUi for Applied Research (JAGUAR) v 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JAGUAR is a Java software tool for automatically rendering a graphical user interface (GUI) from a structured input specification. It is designed as a plug-in to the Eclipse workbench to enable users to create, edit, and externally execute analysis application input decks and then view the results. JAGUAR serves as a GUI for Sandia's DAKOTA software toolkit for optimization and uncertainty quantification. It will include problem (input deck) set-up, option specification, analysis execution, and results visualization. Through the use of wizards, templates, and views, JAGUAR helps users navigate the complexity of DAKOTA's complete input specification. JAGUAR is implemented in Java, leveraging Eclipse extension points and the Eclipse user interface. JAGUAR parses a DAKOTA NIDR input specification and presents the user with linked graphical and plain-text representations of problem set-up and option specification for DAKOTA studies. After the data has been input by the user, JAGUAR generates one or more input files for DAKOTA, executes DAKOTA, and captures and interprets the results.

  13. Four applications of a software data collection and analysis methodology

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    The evaluation of software technologies suffers because of the lack of quantitative assessment of their effect on software development and modification. A seven-step data collection and analysis methodology couples software technology evaluation with software measurement. Four in-depth applications of the methodology are presented. The four studies represent each of the general categories of analyses on the software product and development process: blocked subject-project studies, replicated project studies, multi-project variation studies, and single project strategies. The four applications are in the areas of, respectively, software testing, cleanroom software development, characteristic software metric sets, and software error analysis.

  14. Solar Astronomy Data Base: Packaged Information on Diskette

    NASA Technical Reports Server (NTRS)

    Mckinnon, John A.

    1990-01-01

    In its role as a library, the National Geophysical Data Center has transferred to diskette a collection of small, digital files of routinely measured solar indices for use on an IBM-compatible desktop computer. Recording these observations on diskette allows the distribution of specialized information to researchers with a wide range of expertise in computer science and solar astronomy. Every data set was made self-contained by including formats, extraction utilities, and plain-language descriptive text. Moreover, for several archives, two versions of the observations are provided - one suitable for display, the other for analysis with popular software packages. Since the files contain no control characters, each one can be modified with any text editor.

  15. Free software to analyse the clinical relevance of drug interactions with antiretroviral agents (SIMARV®) in patients with HIV/AIDS.

    PubMed

    Giraldo, N A; Amariles, P; Monsalve, M; Faus, M J

    Highly active antiretroviral therapy has extended the expected lifespan of patients with HIV/AIDS. However, the therapeutic benefits of some drugs used simultaneously with highly active antiretroviral therapy may be adversely affected by drug interactions. The goal was to design and develop a free software package to facilitate analysis, assessment, and clinical decision making according to the clinical relevance of drug interactions in patients with HIV/AIDS. A comprehensive Medline/PubMed database search of drug interactions was performed. Articles that recognized any drug interactions in HIV disease were selected. The publications accessed were limited to human studies in English or Spanish, with full texts retrieved. Drug interactions were analyzed, assessed, and grouped into four levels of clinical relevance according to gravity and probability. Software to systematize the information regarding drug interactions and their clinical relevance was designed and developed. Overall, 952 different references were retrieved and 446 selected; in addition, 67 articles were selected from the citation lists of identified articles. A total of 2119 pairs of drug interactions were identified; of this group, 2006 (94.7%) were drug-drug interactions, 1982 (93.5%) had an identified pharmacokinetic mechanism, and 1409 (66.5%) were mediated by enzyme inhibition. In terms of clinical relevance, 1285 (60.6%) drug interactions were clinically significant in patients with HIV (levels 1 and 2). With this information, a software program that facilitates identification and assessment of the clinical relevance of antiretroviral drug interactions (SIMARV®) was developed. A free software package with information on 2119 pairs of antiretroviral drug interactions was designed and developed that could facilitate analysis, assessment, and clinical decision making according to the clinical relevance of drug interactions in patients with HIV/AIDS. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. A measuring tool for tree-rings analysis

    NASA Astrophysics Data System (ADS)

    Shumilov, Oleg; Kanatjev, Alexander; Kasatkina, Elena

    2013-04-01

    A special tool has been created for the measurement and analysis of annual tree-ring widths. It consists of a professional scanner, a computer system, and software. In many respects the complex is not inferior to similar systems (LINTAB, WinDENDRO), and in comparison with manual measurement systems it offers a number of advantages: productivity gains, the possibility of archiving measurement results at any stage of processing, and operator comfort. New software has been developed that allows processing of samples of different types (cores, saw cuts), including samples that are difficult to process because of complex wood structure (inhomogeneous growth in different directions; missing, light, and false rings; etc.). This software can analyze pictures made with optical scanners or with analog or digital cameras. The software was written in C++ and is compatible with modern Windows operating systems. Annual ring widths are measured along paths traced interactively. These paths can have any orientation and can be created so that ring widths are measured perpendicular to ring boundaries. A graph of ring widths as a function of year is displayed on screen during the analysis; it can be used for visual and numerical cross-dating and for comparison with other series or master chronologies. Ring widths are saved to text files in a special format, and those files are converted to the format accepted for data conservation in the International Tree-Ring Data Bank. The complex is universal in application, which will allow its use for solving various problems in biology and ecology. With its help, long-term juniper (1328-2004) and pine (1445-2005) tree-ring chronologies have been reconstructed from samples collected on the Kola Peninsula (northwestern Russia).
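
    The core measurement step, locating ring boundaries along a scan path, can be sketched in Python with an intensity profile and a peak detector; the synthetic profile and scanner resolution below are stand-ins for real scanned cores.

        import numpy as np
        from scipy.signal import find_peaks

        dpi = 1200
        x = np.arange(5000)
        profile = 0.5 + 0.4 * np.sin(2 * np.pi * x / 120)  # fake annual banding

        # Dark latewood bands appear as intensity minima: peaks of -profile.
        boundaries, _ = find_peaks(-profile, distance=60)
        widths_mm = np.diff(boundaries) * 25.4 / dpi       # pixel gaps to mm
        print(widths_mm[:5])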

  17. Efficacy of a Newly Designed Cephalometric Analysis Software for McNamara Analysis in Comparison with Dolphin Software.

    PubMed

    Nouri, Mahtab; Hamidiaval, Shadi; Akbarzadeh Baghban, Alireza; Basafa, Mohammad; Fahim, Mohammad

    2015-01-01

    Cephalometric norms of McNamara analysis have been studied in various populations due to their optimal efficiency. Dolphin cephalometric software greatly facilitates the conduct of this analysis for orthodontic measurements. However, Dolphin is very expensive and cannot be afforded by many clinicians in developing countries. A suitable alternative software program in Farsi/English would greatly help Farsi-speaking clinicians. The present study aimed to develop an affordable Iranian cephalometric analysis software program and compare it with Dolphin, the standard software available on the market for cephalometric analysis. In this diagnostic, descriptive study, 150 lateral cephalograms of individuals with normal occlusion were selected in Mashhad and Qazvin, two major cities of Iran mainly populated by the Fars ethnicity, the main Iranian ethnic group. After tracing the cephalograms, the McNamara analysis standards were measured both with Dolphin and with the new software. The cephalometric software was designed using Microsoft Visual C++ in Windows XP. Measurements made with the new software were compared with those of Dolphin software on both series of cephalograms. Validity and reliability were tested using the intra-class correlation coefficient. Calculations showed a very high correlation between the results of the Iranian cephalometric analysis software and Dolphin. This confirms the validity and optimal efficacy of the newly designed software (ICC 0.570-1.0). According to our results, the newly designed software has acceptable validity and reliability and can be used for orthodontic diagnosis, treatment planning and assessment of treatment outcome.

  18. GPFrontend and GPGraphics: graphical analysis tools for genetic association studies.

    PubMed

    Uebe, Steffen; Pasutto, Francesca; Krumbiegel, Mandy; Schanze, Denny; Ekici, Arif B; Reis, André

    2010-09-21

    Most software packages for whole genome association studies are non-graphical, purely text-based programs originally designed to run on UNIX-like operating systems. Graphical output is often not provided, or is expected to be produced with separate command-line tools, e.g. gnuplot. Using the Microsoft .NET 2.0 platform and Visual Studio 2005, we have created a graphical software package to analyze data from microarray whole genome association studies, both for a DNA-pooling based approach as well as regular single sample data. Part of this package was made to integrate with GenePool 0.8.2, a previously existing software suite for GNU/Linux systems, which we have modified to run in a Microsoft Windows environment. Further modifications cause it to generate some additional data. This enables GenePool to interact with the .NET parts created by us. The programs we developed are GPFrontend, a graphical user interface and frontend to use GenePool and create metadata files for it, and GPGraphics, a program to further analyze and graphically evaluate output of different WGA analysis programs, among them also GenePool. Our programs enable regular MS Windows users without much experience in bioinformatics to easily visualize whole genome data from a variety of sources.

  19. Knowledge Based Software Assistant Conference Proceedings (4th) Held in Syracuse, New York on 12-14 September 1989

    DTIC Science & Technology

    1990-05-01

    Sanders Associates, Inc. A demonstration of knowledge-based support for the evolutionary development of software system requirements using ... text... Conference Committee ... Spin-Off Technologies ... AN OVERVIEW OF RADC'S KNOWLEDGE BASED SOFTWARE ASSISTANT PROGRAM, Donald M. Elefante, Rome Air... The Knowledge-Based Software Assistant is a formally based, computer-mediated paradigm for the specification, development, evolution, and long-term...

  20. Native Language Processing using Exegy Text Miner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Compton, J

    2007-10-18

    Lawrence Livermore National Laboratory's New Architectures Testbed recently evaluated Exegy's Text Miner appliance to assess its applicability to high-performance, automated native language analysis. The evaluation was performed with support from the Computing Applications and Research Department in close collaboration with Global Security programs and institutional activities in native language analysis. The Exegy Text Miner is a special-purpose device for detecting and flagging user-supplied patterns of characters, whether in streaming text or in collections of documents, at very high rates. Patterns may consist of simple lists of words or complex expressions with sub-patterns linked by logical operators. These searches are accomplished through a combination of specialized hardware (i.e., one or more field-programmable gate arrays in addition to general-purpose processors) and proprietary software that exploits these individual components in an optimal manner (through parallelism and pipelining). For this application the Text Miner has performed accurately and reproducibly at high speeds approaching those documented by Exegy in its technical specifications. The Exegy Text Miner is primarily intended for the single-byte ASCII characters used in English, but at a technical level its capabilities are language-neutral and can be applied to multi-byte character sets such as those found in Arabic and Chinese. The system is used for searching databases or tracking streaming text with respect to one or more lexicons. In a real operational environment it is likely that data would need to be processed separately for each lexicon or search technique. However, the searches would be so fast that multiple passes should not be considered a limitation a priori. Indeed, it is conceivable that large databases could be searched as often as necessary if new queries were deemed worthwhile. This project is concerned with evaluating the Exegy Text Miner installed in the New Architectures Testbed running under software version 2.0. The concrete goals of the evaluation were to test the speed and accuracy of the Exegy and to explore ways that it could be employed in current or future text-processing projects at Lawrence Livermore National Laboratory (LLNL). The study also extended to evaluating its suitability for processing foreign-language sources. The scope of this study was limited to the capabilities of the Exegy Text Miner in the file search mode and does not attempt to simulate the streaming mode. Since the capabilities of the machine are invariant to the choice of input mode, and since timing should not depend on this choice, it was felt that the added effort was not necessary for this restricted study.
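
    A software-only sketch of the lexicon-scanning task (the appliance's speed comes from its FPGA hardware, which this does not emulate) could look like the following; the lexicon and sample lines are invented.

        import re

        lexicon = ["centrifuge", "enrichment", "reactor"]
        pattern = re.compile("|".join(re.escape(w) for w in lexicon), re.IGNORECASE)

        def scan(lines):
            # Flag every occurrence of a lexicon term, with its line number.
            for lineno, line in enumerate(lines, 1):
                for match in pattern.finditer(line):
                    yield lineno, match.group()

        text = ["Routine inspection of the reactor hall.", "No enrichment activity seen."]
        print(list(scan(text)))  # [(1, 'reactor'), (2, 'enrichment')]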

  1. [Application of text mining approach to pre-education prior to clinical practice].

    PubMed

    Koinuma, Masayoshi; Koike, Katsuya; Nakamura, Hitoshi

    2008-06-01

    We developed a new survey analysis technique to understand students' actual aims, for effective pretraining prior to clinical practice. We asked third-year undergraduate students to write fixed-style complete and free sentences on "preparation of drug dispensing." Then, we converted their sentence data into text format and performed Japanese-language morphologic analysis on the data using language analysis software. We classified key words, which were created on the basis of the word-class information from the Japanese-language morphologic analysis, into categories based on causes and characteristics. In addition, using the KJ method, we classified the characteristics into six categories covering concepts such as "knowledge," "skill and attitude," "image," etc. The results showed that students' awareness of "preparation of drug dispensing" tended to be approximately three-fold more frequent in "skill and attitude," "risk," etc. than in "knowledge." Regarding the characteristics in the "image" category, words like "hard," "challenging," "responsibility," "life," etc. frequently occurred. The results of correspondence analysis showed that the characteristics of the words "knowledge" and "skill and attitude" were independent. As a result of developing a cause-and-effect diagram, it was demonstrated that the phrase "hanging tough" described most of the various factors. We thus could understand students' actual feelings by applying text mining as a new survey analysis technique.
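
    The morphological-analysis step can be illustrated with the fugashi/MeCab tokenizer (an assumption; the paper does not name its language analysis software), which yields the surface form and word-class information of the kind used to build the key words.

        from fugashi import Tagger  # requires a dictionary such as unidic-lite

        tagger = Tagger()
        text = "調剤の事前実習は大変だが責任が重い"
        for word in tagger(text):
            print(word.surface, word.feature.pos1)  # token and part of speech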

  2. Automated interpretation of home blood pressure assessment (Hy-Result software) versus physician's assessment: a validation study.

    PubMed

    Postel-Vinay, Nicolas; Bobrie, Guillaume; Ruelland, Alan; Oufkir, Majida; Savard, Sebastien; Persu, Alexandre; Katsahian, Sandrine; Plouin, Pierre F

    2016-04-01

    Hy-Result is the first software for self-interpretation of home blood pressure measurement results, taking into account both the recommended thresholds for normal values and patient characteristics. We compared the software-generated classification with the physician's evaluation. The primary assessment criterion was whether the algorithm's classification of blood pressure (BP) status concurred with the physician's advice (blinded to the software's results) following a consultation (n=195 patients). The secondary assessment was the reliability of the text messages. In the 58 untreated patients, agreement between the BP status classification generated by the software and the physician's classification was 87.9%. In the 137 treated patients, agreement was 91.9%. The κ statistic for all patients was 0.81 (95% confidence interval: 0.73-0.89). After correction of errors identified in the algorithm during the study, agreement increased to 95.4% [κ=0.9 (95% confidence interval: 0.84-0.97)]. For 100% of the patients with comorbidities (n=46), specific text messages were generated, indicating that a physician might recommend a target BP lower than 135/85 mmHg. Specific text messages were also generated for 100% of the patients whose global cardiovascular risks markedly exceeded norms. Classification by Hy-Result is at least as accurate as that of a specialist in current practice (http://www.hy-result.com).
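
    The rule-based core of such a classification is small; the toy Python sketch below takes only the 135/85 mmHg threshold from the abstract, while the function name, fields, and message text are invented.

        def classify_home_bp(mean_systolic, mean_diastolic, comorbidities=()):
            status = ("elevated" if mean_systolic >= 135 or mean_diastolic >= 85
                      else "normal")
            messages = []
            if comorbidities:
                # Mirrors the comorbidity-specific text messages: a physician
                # might recommend a target below 135/85 mmHg.
                messages.append("A physician may recommend a lower target BP.")
            return status, messages

        print(classify_home_bp(142, 88, comorbidities=("diabetes",)))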

  3. Text File Comparator

    NASA Technical Reports Server (NTRS)

    Kotler, R. S.

    1983-01-01

    The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts two text files as input and produces a listing of the differences in pseudo-update form. IFCOMP is very useful for monitoring changes made to software at the source code level.
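
    The same task is a one-liner on modern systems; for comparison, a Python sketch with invented file names:

        import difflib

        with open("old.src") as f1, open("new.src") as f2:
            old, new = f1.readlines(), f2.readlines()

        # Print differences, much as IFCOMP listed them in pseudo-update form.
        for line in difflib.unified_diff(old, new, fromfile="old.src", tofile="new.src"):
            print(line, end="")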

  4. Prete-moi Ton Logiciel pour Ecrire un Mot (Lend Me Your Software Program So I Can Write a Letter).

    ERIC Educational Resources Information Center

    Mangenot, Francois

    1993-01-01

    A brief discussion and description of one commercially available software package ("Pour Ecrire un Mot") for writing letters of various types uses the love letter as an example of the software's functioning. Answers to the prompting questions on the screen determine the few variable parameters of the text to be generated. (four…

  5. An ontology based trust verification of software license agreement

    NASA Astrophysics Data System (ADS)

    Lu, Wenhuan; Li, Xiaoqing; Gan, Zengqin; Wei, Jianguo

    2017-08-01

    When we install or download software, a lengthy document is displayed stating rights and obligations, which many people lack the patience to read or understand. This can make users distrust the software. In this paper, we propose an ontology-based verification of software license agreements. First, this work proposes an ontology model for the domain of software license agreements. The domain ontology is constructed with the proposed methodology from copyright law and 30 software license agreements. The license ontology can act as part of a generalized copyright-law knowledge model and can also serve as a visualization of software licenses. Based on this ontology, a software-license-oriented text summarization approach is proposed whose performance shows that it can improve the accuracy of software license summarization. Based on the summarization, the underlying purpose of the software license can be explicitly explored for trust verification.

  6. Debugging and Performance Analysis Software Tools for Peregrine System

    Science.gov Websites

    Learn about debugging and performance analysis software tools available to use with the Peregrine system. Allinea…

  7. Interactive Visualization of Computational Fluid Dynamics using Mosaic

    NASA Technical Reports Server (NTRS)

    Clucas, Jean; Watson, Velvin; Chancellor, Marisa K. (Technical Monitor)

    1994-01-01

    The Web provides new methods for accessing information world-wide, but the current text-and-pictures approach neither utilizes all the Web's possibilities nor provides for its limitations. While the inclusion of pictures and animations in a paper communicates more effectively than text alone, it is essentially an extension of the concept of "publication." Also, as use of the Web increases, putting images and animations online will quickly load even the "Information Superhighway." We need to find forms of communication that take advantage of the special nature of the Web. This paper presents one approach: the use of the Internet and the Mosaic interface for data sharing and collaborative analysis. We will describe (and in the presentation, demonstrate) our approach: using FAST (Flow Analysis Software Toolkit), a scientific visualization package, as a data viewer and interactive tool called from Mosaic. Our intent is to stimulate the development of other tools that utilize the unique nature of electronic communication.

  8. Educational interactive multimedia software: The impact of interactivity on learning

    NASA Astrophysics Data System (ADS)

    Reamon, Derek Trent

    This dissertation discusses the design, development, deployment and testing of two versions of educational interactive multimedia software. Both versions of the software are focused on teaching mechanical engineering undergraduates about the fundamentals of direct-current (DC) motor physics and selection. The two versions of Motor Workshop software cover the same basic materials on motors, but differ in the level of interactivity between the students and the software. Here, the level of interactivity refers to the particular role of the computer in the interaction between the user and the software. In one version, the students navigate through information that is organized by topic, reading text and viewing embedded video clips; this is referred to as "low-level interactivity" software because the computer simply presents the content. In the other version, the students are given a task to accomplish: they must design a small motor-driven 'virtual' vehicle that competes against computer-generated opponents. The interaction is guided by the software, which offers advice from 'experts' and provides contextual information; we refer to this as "high-level interactivity" software because the computer is actively participating in the interaction. The software was used in two sets of experiments, where students using the low-level interactivity software served as the 'control group,' and students using the highly interactive software were the 'treatment group.' Data, including pre- and post-performance tests, questionnaire responses, learning style characterizations, activity tracking logs and videotapes, were collected for analysis. Statistical and observational research methods were applied to the various data to test the hypothesis that the level of interactivity affects the learning situation, with higher levels of interactivity being more effective for learning. The results show that both the low-level and high-level interactive versions of the software were effective in promoting learning about the subject of motors. The focus of learning varied between users of the two versions, however. The low-level version was more effective for teaching concepts and terminology, while the high-level version seemed to be more effective for teaching engineering applications.

  9. User Interactive Software for Analysis of Human Physiological Data

    NASA Technical Reports Server (NTRS)

    Cowings, Patricia S.; Toscano, William; Taylor, Bruce C.; Acharya, Soumydipta

    2006-01-01

    Ambulatory physiological monitoring has been used to study human health and performance in space and in a variety of Earth-based environments (e.g., military aircraft, armored vehicles, small groups in isolation, and patients). Large, multi-channel data files are typically recorded in these environments, and these files often require the removal of contaminated data prior to processing and analysis. Physiological data processing can now be performed with user-friendly, interactive software developed by the Ames Psychophysiology Research Laboratory. This software, which runs on a Windows platform, contains various signal-processing routines for both time- and frequency-domain data analyses (e.g., peak detection, differentiation and integration, digital filtering, adaptive thresholds, Fast Fourier Transform power spectrum, auto-correlation, etc.). Data acquired with any ambulatory monitoring system that provides text or binary file format are easily imported to the processing software. The application provides a graphical user interface where one can manually select and correct data artifacts, utilizing linear and zero interpolation and adding trigger points for missed peaks. Block and moving average routines are also provided for data reduction. Processed data in numeric and graphic format can be exported to Excel. This software, PostProc (for post-processing), requires the Dadisp engineering spreadsheet (DSP Development Corp), or equivalent, for implementation. Specific processing routines were written for electrocardiography, electroencephalography, electromyography, blood pressure, skin conductance level, impedance cardiography (cardiac output, stroke volume, thoracic fluid volume), temperature, and respiration.
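
    One of the data-reduction routines mentioned, the moving average, is compact enough to sketch; the synthetic signal and window length below are illustrative.

        import numpy as np

        def moving_average(x, window):
            kernel = np.ones(window) / window
            return np.convolve(x, kernel, mode="valid")

        signal = np.sin(np.linspace(0, 10, 1000)) + 0.3 * np.random.randn(1000)
        smoothed = moving_average(signal, window=25)
        print(len(signal), len(smoothed))  # 1000 976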

  10. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The purpose was to investigate and evaluate the interchange of application-specific data among multiple programs, each carrying out part of the analysis and design task. This has previously been carried out by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, the data of interest are described using the XML markup language, which allows the data to be stored as a text string. Software to transform the output data of a task into an XML string, and software to read an XML string and extract all or a portion of the data needed for another application, are used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. See Engineering Analysis Using a Web-Based Protocol by J.D. Schoeffler and R.W. Claus, NASA TM-2002-211981, October 2002. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces, as well as turbine-blade data produced by an independent blade design program (UD0300).
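
    The interchange pattern described (one program serializes its output as an XML string, another extracts only what it needs) can be sketched with Python's standard library; the tag and attribute names are invented, not those of the Lapin effort.

        import xml.etree.ElementTree as ET

        # Producer: wrap analysis output in an XML string.
        root = ET.Element("analysis", name="run_1")
        ET.SubElement(root, "parameter", name="mach").text = "0.82"
        ET.SubElement(root, "parameter", name="altitude_ft").text = "35000"
        xml_string = ET.tostring(root, encoding="unicode")

        # Consumer: parse the string and extract only the needed portion.
        doc = ET.fromstring(xml_string)
        print(float(doc.find("parameter[@name='mach']").text))  # 0.82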

  11. Books Online: Visions, Plans, and Perspectives for Electronic Text.

    ERIC Educational Resources Information Center

    Basch, Reva

    1991-01-01

    Discussion of current applications of and future possibilities for electronic text, or e-text, focuses on activities in the area of higher education. Topics covered are input technology, including optical scanners and keyboarding; standardization; copyright issues; access to e-text through networks; user interface; hypertext; software; shareware;…

  12. Adaptive detection of missed text areas in OCR outputs: application to the automatic assessment of OCR quality in mass digitization projects

    NASA Astrophysics Data System (ADS)

    Ben Salah, Ahmed; Ragot, Nicolas; Paquet, Thierry

    2013-01-01

    The French National Library (BnF*) has launched many mass digitization projects in order to give access to its collection. The indexation of digital documents on Gallica (the digital library of the BnF) is done through their textual content, obtained thanks to service providers that use Optical Character Recognition (OCR) software. OCR software has become an increasingly complex system composed of several subsystems dedicated to the analysis and the recognition of the elements in a page. However, the reliability of these systems is always an issue at stake. Indeed, in some cases, we can find errors in OCR outputs that occur because of an accumulation of several errors at different levels in the OCR process. One of the frequent errors in OCR outputs is missed text components. The presence of such errors may lead to severe defects in digital libraries. In this paper, we investigate the detection of missed text components to control the OCR results from the collections of the French National Library. Our verification approach uses local information inside the pages, based on Radon transform descriptors and Local Binary Pattern (LBP) descriptors coupled with OCR results, to control their consistency. The experimental results show that our method detects 84.15% of the missed textual components, by comparing the OCR ALTO file outputs (produced by the service providers) to the images of the documents.
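
    As a rough illustration of one of the two feature types, the sketch below computes a basic 8-neighbour Local Binary Pattern histogram with NumPy; the BnF pipeline's actual descriptors and their coupling with the ALTO output are more elaborate.

        import numpy as np

        def lbp_histogram(gray):
            """Basic 8-neighbour Local Binary Pattern histogram of a greyscale
            image block (values 0..255), a simplified version of the texture
            descriptor used to flag candidate text regions."""
            g = gray.astype(int)
            c = g[1:-1, 1:-1]
            codes = np.zeros_like(c)
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            for bit, (dy, dx) in enumerate(offsets):
                neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
                codes |= ((neigh >= c).astype(int) << bit)
            hist, _ = np.histogram(codes, bins=256, range=(0, 256))
            return hist / hist.sum()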

  13. Urban weather data and building models for the inclusion of the urban heat island effect in building performance simulation.

    PubMed

    Palme, M; Inostroza, L; Villacreses, G; Lobato, A; Carrasco, C

    2017-10-01

    This data article presents files supporting calculations for urban heat island (UHI) inclusion in building performance simulation (BPS). The methodology is used in the research article "From urban climate to energy consumption. Enhancing building performance simulation by including the urban heat island effect" (Palme et al., 2017) [1]. In this research, a Geographical Information System (GIS) study is done in order to statistically represent the most important urban scenarios of four South American cities (Guayaquil, Lima, Antofagasta and Valparaíso). Then, a Principal Component Analysis (PCA) is done to obtain reference Urban Tissue Categories (UTC) to be used in urban weather simulation. The urban weather files are generated using the Urban Weather Generator (UWG) software (version 4.1 beta). Finally, BPS is run with the Transient System Simulation (TRNSYS) software (version 17). In this data paper, four sets of data are presented: 1) PCA data (Excel) explaining how to group different urban samples into representative UTC; 2) UWG data (text) to reproduce the urban weather generation for the UTC used in the four cities (4 UTC in Lima, Guayaquil and Antofagasta, and 5 UTC in Valparaíso); 3) weather data (text) with the resulting rural and urban weather; 4) BPS model data (text) containing the TRNSYS models (four building models).
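
    A minimal sketch of the PCA step, assuming the urban samples arrive as a samples-by-indicators matrix; the indicator data below are invented, and the study's actual variable choices are in the cited data files.

        import numpy as np

        def pca(X, n_components=2):
            """Principal Component Analysis via eigendecomposition of the
            covariance matrix of standardized urban-morphology indicators
            (e.g. building height, plot ratio, vegetation cover per sample)."""
            Xs = (X - X.mean(axis=0)) / X.std(axis=0)
            cov = np.cov(Xs, rowvar=False)
            eigvals, eigvecs = np.linalg.eigh(cov)      # ascending order
            order = np.argsort(eigvals)[::-1][:n_components]
            scores = Xs @ eigvecs[:, order]             # sample coordinates
            explained = eigvals[order] / eigvals.sum()
            return scores, explained

        # Samples (rows) x indicators (columns); clusters in 'scores' suggest UTC groupings.
        X = np.random.default_rng(0).random((40, 6))
        scores, explained = pca(X)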

  14. Estimating Computer-Based Training Development Times

    DTIC Science & Technology

    1987-10-14

    …beginners must be sure they interpret terms correctly. As a result of this informal validation, the authors suggest refinements in the tool, which… Productivity tools available: automated design tools, text processor interfaces, flowcharting software, software interfaces, multimedia interfaces…

  15. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The purpose of grant NCC3-966 was to investigate and evaluate the interchange of application-specific data among multiple programs, each carrying out part of the analysis and design task. This has previously been carried out by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, the data of interest are described using the XML markup language, which allows the data to be stored in a text string. Software to transform the output data of a task into an XML string, and software to read an XML string and extract all or a portion of the data needed by another application, are used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program, organized around dialogs to control the interactions, has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces, and with turbine-blade data produced by an independent blade design program (UD0300).

  16. Nursing identity and patient-centredness in scholarly health services research: a computational text analysis of PubMed abstracts 1986-2013.

    PubMed

    Bell, Erica; Campbell, Steve; Goldberg, Lynette R

    2015-01-22

    The most important and contested element of nursing identity may be the patient-centredness of nursing, though this concept is not well treated in the nursing identity literature. More conceptually based mapping of nursing identity constructs is needed to help nurses shape their identity. The field of computational text analytics offers new opportunities to scrutinise how growing disciplines such as health services research construct nursing identity. This paper maps the conceptual content of scholarly health services research in PubMed as it relates to the patient-centredness of nursing. Computational text analytics software was used to analyse all health services abstracts in the database PubMed since 1986. Abstracts were treated as indicative of the content of health services research. The database PubMed was searched for all research papers using the term "service" or "services" in the abstract or keywords for the period 01/01/1986 to 30/06/2013. A total of 234,926 abstracts were obtained. Leximancer software was used in 1) mapping of 4,144,458 instances of 107 concepts; 2) analysis of 106 paired concept co-occurrences for the nursing concept; and 3) sentiment analysis of the nursing concept versus patient, family and community concepts, and clinical concepts. Nursing is constructed within quality assurance, service implementation or workforce development concepts. It is relatively disconnected from patient, family or community care concepts. For those who agree that patient-centredness should be a part of nursing identity in practice, this study suggests a need to develop health services research into both the nature of the caring construct in nursing identity and its expression in practice. More fundamentally, the study raises questions about whether health services research cultures even value the politically popular idea of nurses as patient-centred caregivers, and whether they should.
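
    Leximancer is proprietary, but the core co-occurrence idea can be sketched simply: count how often pairs of concept terms appear in the same abstract. The concept list and abstracts below are invented.

        from collections import Counter
        from itertools import combinations

        def concept_cooccurrence(abstracts, concepts):
            """Count, over a corpus of abstracts, how often each pair of
            concept terms appears together in the same abstract."""
            pair_counts = Counter()
            for text in abstracts:
                present = {c for c in concepts if c in text.lower()}
                for pair in combinations(sorted(present), 2):
                    pair_counts[pair] += 1
            return pair_counts

        abstracts = ["Nursing workforce development and service implementation ...",
                     "Patient and family centred care in community services ..."]
        concepts = ["nursing", "patient", "family", "community", "workforce"]
        counts = concept_cooccurrence(abstracts, concepts)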

  17. Data analysis software for the autoradiographic enhancement process. Volumes 1, 2, and 3, and appendix

    NASA Technical Reports Server (NTRS)

    Singh, S. P.

    1979-01-01

    The computer software developed to set up a method for Wiener spectrum analysis of photographic films is presented. This method is used for the quantitative analysis of the autoradiographic enhancement process. The software requirements and design for the autoradiographic enhancement process are given, along with the program listings and the user's manual. A software description and program listings for modifications of the data analysis software are included.

  18. DATALINK: Records inventory data collection software. User's guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, B.A.

    1995-03-01

    DATALINK was created to provide an easy-to-use data collection program for records management software products. It provides several useful tools for capturing and validating record index data in the field. It also allows users to easily create a comma-delimited, ASCII text file for data export into most records management software products. It runs on virtually any computer using MS-DOS.
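
    The comma-delimited export DATALINK produces can be sketched with Python's csv module; the field names below are hypothetical, not DATALINK's actual index fields.

        import csv

        def export_records(records, path):
            """Write records inventory data to a comma-delimited ASCII file,
            the interchange format most records management products accept."""
            fields = ["box_number", "record_title", "date_range", "location"]
            with open(path, "w", newline="", encoding="ascii") as f:
                writer = csv.DictWriter(f, fieldnames=fields)
                writer.writeheader()
                writer.writerows(records)

        export_records([{"box_number": "001", "record_title": "Contract files",
                         "date_range": "1990-1994", "location": "Bldg 12"}],
                       "inventory.txt")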

  19. Tips for Good Electronic Presentations.

    ERIC Educational Resources Information Center

    Strasser, Dennis

    1996-01-01

    Describes library uses of presentation graphics software and offers tips for creating electronic presentations. Tips include: audience retention; visual aid options; software package options; presentation planning; presentation showing; and use of text, colors, and graphics. Sidebars note common presentation errors and popular presentation…

  20. Using Text Analytics of AJPE Article Titles to Reveal Trends In Pharmacy Education Over the Past Two Decades.

    PubMed

    Pedrami, Farnoush; Asenso, Pamela; Devi, Sachin

    2016-08-25

    Objective. To identify trends in pharmacy education during the last two decades using text mining. Methods. Articles published in the American Journal of Pharmaceutical Education (AJPE) in the past two decades were compiled in a database. Custom text analytics software was written in the Visual Basic programming language using the Visual Basic for Applications (VBA) editor of Excel 2007. The frequency of words appearing in article titles was calculated using the custom VBA software. Data were analyzed to identify emerging trends in pharmacy education. Results. Three educational trends emerged: active learning, interprofessional, and cultural competency. Conclusion. The text analytics program successfully identified trends in article topics and may be a useful compass to predict the future course of pharmacy education.
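
    The authors wrote their tool in VBA; a Python sketch of the same title word-frequency idea follows. The stopword list and sample titles are illustrative.

        import re
        from collections import Counter

        STOPWORDS = frozenset({"the", "of", "and", "in", "a", "for", "to"})

        def title_word_frequencies(titles):
            """Tokenize article titles and count word frequencies, the core
            operation behind the trend analysis described above."""
            counts = Counter()
            for title in titles:
                words = re.findall(r"[a-z]+", title.lower())
                counts.update(w for w in words if w not in STOPWORDS)
            return counts

        titles = ["Active Learning in a Pharmacy Skills Laboratory",
                  "Interprofessional Education and Cultural Competency"]
        top = title_word_frequencies(titles).most_common(5)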

  1. The QDP/PLT user's guide

    NASA Technical Reports Server (NTRS)

    Tennant, Allyn F.

    1991-01-01

    PLT is a high-level plotting package. A programmer can create a default plot suited to the data being displayed. At run time, users can then interact with the plot, overriding any or all of these defaults. The user is also provided with the capability to fit functions to the displayed data. This ability to display, interact with, and fit the data makes PLT a useful tool in the analysis of data. The Quick and Dandy Plotter (QDP) program reads ASCII text files that contain PLT commands and data. Thus, QDP provides an easy way to use the PLT software, and QDP files provide a convenient way to exchange data. The QDP/PLT software is written in standard FORTRAN 77 and has been ported to VAX VMS, SUN UNIX, IBM AIX, NeXT NextStep, and MS-DOS systems.

  2. Infusing Reliability Techniques into Software Safety Analysis

    NASA Technical Reports Server (NTRS)

    Shi, Ying

    2015-01-01

    Software safety analysis for a large, software-intensive system is always a challenge. Software safety practitioners need to ensure that software-related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.

  3. KymoKnot: A web server and software package to identify and locate knots in trajectories of linear or circular polymers.

    PubMed

    Tubiana, Luca; Polles, Guido; Orlandini, Enzo; Micheletti, Cristian

    2018-06-07

    The KymoKnot software package and web server identifies and locates physical knots or proper knots in a series of polymer conformations. It is mainly intended as an analysis tool for trajectories of linear or circular polymers, but it can be used on single instances too, e.g. protein structures in PDB format. A key element of the software package is the so-called minimally interfering chain closure algorithm that is used to detect physical knots in open chains and to locate the knotted region in both open and closed chains. The web server offers a user-friendly graphical interface that identifies the knot type and highlights the knotted region on each frame of the trajectory, which the user can visualize interactively from various viewpoints. The dynamical evolution of the knotted region along the chain contour is presented as a kymograph. All data can be downloaded in text format. The KymoKnot package is licensed under the BSD 3-Clause licence. The server is publicly available at http://kymoknot.sissa.it/kymoknot/interactive.php.

  4. Recommendations and settings of word prediction software by health-related professionals for patients with spinal cord injury: a prospective observational study.

    PubMed

    Pouplin, Samuel; Roche, Nicolas; Hugeron, Caroline; Vaugier, Isabelle; Bensmail, Djamel

    2016-02-01

    For people with cervical spinal cord injury (SCI), access to computers can be difficult, so several devices have been developed to facilitate their use. However, even with these devices, text input speed remains very slow compared to users who do not have a disability. Several methods have been developed to increase text input speed, such as word prediction software (WPS). Health-related professionals (HRP) often recommend this type of software to people with cervical SCI. WPS can be customized using different settings, and it is likely that the settings used influence the effectiveness of the software on text input speed. However, there is currently a lack of literature regarding professional practices for the setting of WPS as well as the impact for users. The aim was to analyze the word prediction software settings used by HRP for people with cervical SCI. Prospective observational study. Garches, France; health-related professionals who recommend word prediction software. A questionnaire was submitted to HRP who advise tetraplegic people regarding the use of communication devices. A total of 93 professionals responded to the survey. The most frequently recommended software was Skippy, a commercially available package. HRP rated the importance of the possibility to customise the settings as high. Moreover, they rated some settings as more important than others (P<0.001). However, except for the number of words displayed, each setting was configured by less than 50% of HRP. The results showed that there was a difference between the perception of the importance of some settings and data in the literature regarding the optimization of settings. Moreover, although some parameters were considered very important, they were rarely specifically configured. Confidence in default settings and a lack of information regarding optimal settings seem to be the main reasons for this discordance. This could also explain the disparate results of studies that evaluated the impact of WPS on text input speed in people with cervical SCI. Professionals tend to have confidence in default settings, despite the fact that they are not always appropriate for users. It thus seems essential to develop information networks and training to disseminate the results of studies and, in consequence, possibly improve communication for people with cervical SCI who use such devices.

  5. OpenStereo: Open Source, Cross-Platform Software for Structural Geology Analysis

    NASA Astrophysics Data System (ADS)

    Grohmann, C. H.; Campanha, G. A.

    2010-12-01

    Free and open source software (FOSS) is increasingly seen as a synonym of innovation and progress. Freedom to run, copy, distribute, study, change and improve the software (through access to the source code) assures a high level of positive feedback between users and developers, which results in stable, secure and constantly updated systems. Several software packages for structural geology analysis are available to the user, with commercial licenses or downloadable at no cost from the Internet. Some provide basic tools of stereographic projection such as plotting poles, great circles, density contouring, eigenvector analysis, data rotation etc., while others perform more specific tasks, such as paleostress or geotechnical/rock stability analysis. This variety also means a wide range of data formatting for input, Graphical User Interface (GUI) design and graphic export formats. The majority of packages are built for MS-Windows, and even though there are packages for the UNIX-based MacOS, there are no native packages for *nix (UNIX, Linux, BSD etc.) operating systems (OS), forcing users to run these programs with emulators or virtual machines. Those limitations led us to develop OpenStereo, an open source, cross-platform software package for stereographic projections and structural geology. The software is written in Python, a high-level, cross-platform programming language, and the GUI is designed with wxPython, which provides a consistent look regardless of the OS. Numeric operations (like matrix and linear algebra) are performed with the Numpy module, and all graphic capabilities are provided by the Matplotlib library, including on-screen plotting and graphic exporting to common desktop formats (emf, eps, ps, pdf, png, svg). Data input is done with simple ASCII text files, with values of dip direction and dip/plunge separated by spaces, tabs or commas. The user can open multiple files at the same time (or the same file more than once), and overlay different elements of each dataset (poles, great circles etc.). The GUI shows the opened files in a tree structure, similar to the “layers” of many illustration programs, where the vertical order of the files in the tree reflects the drawing order of the selected elements. At this stage, the software performs plotting of poles to planes, lineations, great circles, density contours and rose diagrams. A set of statistics is calculated for each file, and its eigenvalues and eigenvectors are used to suggest whether the data are clustered about a mean value or distributed along a girdle. Modified Flinn, triangular and histogram plots are also available. The next step of development will focus on tools such as merging and rotation of datasets, the possibility to save 'projects', and paleostress analysis. In its current state, OpenStereo requires Python, wxPython, Numpy and Matplotlib installed on the system. We recommend installing PythonXY or the Enthought Python Distribution on MS-Windows and MacOS machines, since all dependencies are provided. Most Linux distributions provide an easy way to install all dependencies through software repositories. OpenStereo is released under the GNU General Public License. Programmers willing to contribute are encouraged to contact the authors directly. FAPESP Grant #09/17675-5
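
    The cluster-versus-girdle suggestion mentioned above is conventionally based on eigen-analysis of the orientation tensor, which the sketch below reproduces with NumPy for lines given as trend/plunge. This is a generic sketch of the standard technique, not OpenStereo source code.

        import numpy as np

        def orientation_eigenanalysis(trend_deg, plunge_deg):
            """Eigenvalues/eigenvectors of the orientation tensor of a set of
            lines (trend/plunge in degrees). A single dominant eigenvalue
            suggests a cluster; two comparable large eigenvalues suggest a girdle."""
            t = np.radians(trend_deg)
            p = np.radians(plunge_deg)
            # Direction cosines (north, east, down) of each unit vector.
            v = np.column_stack((np.cos(p) * np.cos(t),
                                 np.cos(p) * np.sin(t),
                                 np.sin(p)))
            T = v.T @ v / len(v)                  # orientation tensor
            eigvals, eigvecs = np.linalg.eigh(T)  # ascending eigenvalues
            return eigvals[::-1], eigvecs[:, ::-1]

        eigvals, eigvecs = orientation_eigenanalysis([110, 115, 108], [35, 40, 38])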

  6. Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.

    ERIC Educational Resources Information Center

    Pieska, K. A. O.

    1986-01-01

    Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)

  7. Re-Writing the Construction History of Boughton House (Northamptonshire, UK) with the Help of DOCU-TOOLS®

    NASA Astrophysics Data System (ADS)

    Schuster, J. C.

    2017-08-01

    The tablet-based software docu-tools digitizes the documentation of buildings, and simplifies construction and facility management as well as data analysis in building and construction-history research. As plan-based software, 'pins' can be set to record data (images, audio, text etc.), each data point containing a time and date stamp. Once a pin is set and information recorded, it can never be deleted from the system, creating clear, contention-free documentation. Reports on any or all recorded data can be generated immediately through various templates in order to share, document, analyze and archive the information gathered. The software both digitizes building condition assessment and simplifies the fully documented management and solving of problems and the monitoring of a building. Used both in the construction industry and for documenting and analyzing historic buildings, docu-tools is a versatile and flexible tool that has become integral to my work as a building historian working on the conservation and curating of the historic built environment in Europe. I used the software at Boughton House, Northamptonshire, UK, during a one-year research project into the construction history of the building. The details of how docu-tools was used during this project will be discussed in this paper.

  8. Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.

    PubMed

    Satorra, Albert; Bentler, Peter M

    2010-06-01

    A scaled difference test statistic T̃(d) that can be computed from the standard output of structural equation modeling (SEM) software by hand calculations was proposed in Satorra and Bentler (2001). The statistic T̃(d) is asymptotically equivalent to the scaled difference test statistic T̄(d) introduced in Satorra (2000), which requires more involved computations beyond the standard output of SEM software. The test statistic T̃(d) has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄(d) that avoids negative chi-square values.
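
    For reference, the Satorra-Bentler scaled difference computation has the following widely used form (a reconstruction from the standard literature, with T_0 and T_1 the test statistics of the nested models, d_0 > d_1 their degrees of freedom, and ĉ_0 and ĉ_1 the estimated scaling corrections):

        \tilde{T}_d = \frac{T_0 - T_1}{\hat{c}_d},
        \qquad
        \hat{c}_d = \frac{d_0\,\hat{c}_0 - d_1\,\hat{c}_1}{d_0 - d_1}

    The statistic is negative exactly when the scaling correction ĉ_d is negative, which is the situation the note's improved correction is designed to avoid.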

  9. Methodology for automating software systems. Task 1 of the foundations for automating software systems

    NASA Technical Reports Server (NTRS)

    Moseley, Warren

    1989-01-01

    The early stages of a research program designed to establish an experimental research platform for software engineering are described. Major emphasis is placed on Computer Assisted Software Engineering (CASE). The Poor Man's CASE Tool (PMCT) is based on the Apple Macintosh system, employing available software including Focal Point II, HyperCard, XRefText, and MacProject. These programs are functional in themselves, but through advanced linking they are available for operation from within the tool being developed. The research platform is intended to merge software engineering technology with artificial intelligence (AI); in the first prototype of the PMCT, however, the AI sections are not included. CASE tools assist the software engineer in planning goals, routes to those goals, and ways to measure progress. The method described allows software to be synthesized instead of being written or built.

  10. Inertial Upper Stage (IUS) software analysis

    NASA Technical Reports Server (NTRS)

    Grayson, W. L.; Nickel, C. E.; Rose, P. L.; Singh, R. P.

    1979-01-01

    The Inertial Upper Stage (IUS) System, which extends the Space Transportation System (STS) operating regime to include higher orbits, orbital plane changes, geosynchronous orbits, and interplanetary trajectories, is presented. The IUS software design, the IUS software interfaces with other systems, and the cost effectiveness of software verification are described. Tasks of the IUS discussed include: (1) design analysis; (2) validation requirements analysis; (3) interface analysis; and (4) requirements analysis.

  11. Astronomical Software Directory Service

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.; Payne, Harry; Hayes, Jeffrey

    1997-01-01

    With the support of NASA's Astrophysics Data Program (NRA 92-OSSA-15), we have developed the Astronomical Software Directory Service (ASDS): a distributed, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching. Users are performing about 400 searches per month. A new aspect of our service is the inclusion of telescope and instrumentation manuals, which prompted us to change the name to the Astronomical Software and Documentation Service. ASDS was originally conceived to serve two purposes: to provide a useful Internet service in an area of expertise of the investigators (astronomical software), and as a research project to investigate various architectures for searching through a set of documents distributed across the Internet. Two of the co-investigators were then installing and maintaining astronomical software as their primary job responsibility. We felt that a service which incorporated our experience in this area would be more useful than a straightforward listing of software packages. The original concept was for a service based on the client/server model, which would function as a directory/referral service rather than as an archive. For performing the searches, we began our investigation with a decision to evaluate the Isite software from the Center for Networked Information Discovery and Retrieval (CNIDR). This software was intended as a replacement for Wide-Area Information Service (WAIS), a client/server technology for performing full-text searches through a set of documents. Isite had some additional features that we considered attractive, and we enjoyed the cooperation of the Isite developers, who were happy to have ASDS as a demonstration project. We ended up staying with the software throughout the project, making modifications to take advantage of new features as they came along, as well as influencing the software development. The Web interface to the search engine is provided by a gateway program written in C++ by a consultant to the project (A. Warnock).

  12. Software Safety Progress in NASA

    NASA Technical Reports Server (NTRS)

    Radley, Charles F.

    1995-01-01

    NASA has developed guidelines for development and analysis of safety-critical software. These guidelines have been documented in a Guidebook for Safety Critical Software Development and Analysis. The guidelines represent a practical 'how to' approach, to assist software developers and safety analysts in cost effective methods for software safety. They provide guidance in the implementation of the recent NASA Software Safety Standard NSS-1740.13 which was released as 'Interim' version in June 1994, scheduled for formal adoption late 1995. This paper is a survey of the methods in general use, resulting in the NASA guidelines for safety critical software development and analysis.

  13. Grandmaster: Interactive text-based analytics of social media [PowerPoint]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabian, Nathan D.; Davis, Warren Leon,; Raybourn, Elaine M.

    People use social media resources like Twitter, Facebook, forums etc. to share and discuss various activities or topics. By aggregating topic trends across many individuals using these services, we seek to construct a richer profile of a person’s activities and interests as well as provide a broader context for those activities. This profile may then be used in a variety of ways to understand groups as a collection of interests and affinities and an individual’s participation in those groups. Our approach considers that much of these data will be unstructured, free-form text. By analyzing free-form text directly, we may be able to gain an implicit grouping of individuals with shared interests based on shared conversation, and not on explicit social software linking them. In this paper, we discuss a proof-of-concept application called Grandmaster, built to pull short sections of text, a person’s comments or Twitter posts, together by analysis and visualization to allow a gestalt understanding of the full collection of all individuals: how groups are similar and how they differ, based on their text inputs.

  14. Grandmaster: Interactive text-based analytics of social media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabian, Nathan D.; Davis, Warren Leon,; Raybourn, Elaine M.

    People use social media resources like Twitter, Facebook, forums etc. to share and discuss various activities or topics. By aggregating topic trends across many individuals using these services, we seek to construct a richer profile of a person’s activities and interests as well as provide a broader context for those activities. This profile may then be used in a variety of ways to understand groups as a collection of interests and affinities and an individual’s participation in those groups. Our approach considers that much of these data will be unstructured, free-form text. By analyzing free-form text directly, we may be able to gain an implicit grouping of individuals with shared interests based on shared conversation, and not on explicit social software linking them. In this paper, we discuss a proof-of-concept application called Grandmaster, built to pull short sections of text, a person’s comments or Twitter posts, together by analysis and visualization to allow a gestalt understanding of the full collection of all individuals: how groups are similar and how they differ, based on their text inputs.

  15. A survey of Canadian medical physicists: software quality assurance of in-house software.

    PubMed

    Salomons, Greg J; Kelly, Diane

    2015-01-05

    This paper reports on a survey of medical physicists who write and use in-house written software as part of their professional work. The goal of the survey was to assess the extent of in-house software usage and the desire or need for related software quality guidelines. The survey contained eight multiple-choice questions, a ranking question, and seven free text questions. The survey was sent to medical physicists associated with cancer centers across Canada. The respondents to the survey expressed interest in having guidelines to help them in their software-related work, but also demonstrated extensive skills in the area of testing, safety, and communication. These existing skills form a basis for medical physicists to establish a set of software quality guidelines.

  16. Supporting Development of Satellite's Guidance Navigation and Control Software: A Product Line Approach

    NASA Technical Reports Server (NTRS)

    McComas, David; Stark, Michael; Leake, Stephen; White, Michael; Morisio, Maurizio; Travassos, Guilherme H.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The NASA Goddard Space Flight Center Flight Software Branch (FSB) is developing a Guidance, Navigation, and Control (GNC) Flight Software (FSW) product line. The demand for increasingly complex flight software in less time, while maintaining the same level of quality, has motivated us to look for better FSW development strategies. The GNC FSW product line has been planned to address the core GNC FSW functionality that has been very similar on many low/near-Earth missions of the last ten years. Unfortunately, these missions have not accomplished significant drops in development cost, since a systematic approach towards reuse has not been adopted. In addition, new demands are continually being placed upon the FSW, which means the FSB must become more adept at providing the core GNC FSW functionality so it can accommodate additional requirements. These domain features, together with engineering concepts, are influencing the specification, description and evaluation of the FSW product line. Domain engineering is the foundation for emerging product line software development approaches. A product line is 'a family of products designed to take advantage of their common aspects and predicted variabilities'. In our product line approach, domain engineering includes the engineering activities needed to produce reusable artifacts for a domain. Application engineering refers to developing an application in the domain starting from reusable artifacts. The focus of this paper is the software process, lessons learned, and how the GNC FSW product line manages variability. Existing domain engineering approaches do not enforce any specific notation for domain analysis or commonality and variability analysis. Usually, natural language text is the preferred tool. The advantage is the flexibility and adaptability of natural language. However, one must also accept its well-known drawbacks, such as ambiguity, inconsistency, and contradictions. While most domain analysis approaches are functionally oriented, the idea of applying the object-oriented approach in domain analysis is not new. Some authors propose to use UML as the notation underlying domain analysis. Our work is based on the same idea of merging UML and domain analysis. Further, we propose a few extensions to UML in order to express variability, and we define their semantics precisely so that a tool can support them. The extensions are designed to be implemented on the API of a popular industrial CASE tool, with obvious advantages in cost and availability of tool support. The paper outlines the product line processes and identifies where variability must be addressed. Then it describes the product line products with respect to how they accommodate variability. The Celestial Body subdomain is used as a working example. Our results to date are summarized and plans for the future are described.

  17. Adapting a Computerized Medical Dictation System to Prepare Academic Papers in Radiology.

    PubMed

    Sánchez, Yadiel; Prabhakar, Anand M; Uppot, Raul N

    2017-09-14

    Every day, radiologists use dictation software to compose clinical reports of imaging findings. The dictation software is tailored for medical use and to the speech pattern of each radiologist. Over the past 10 years we have used dictation software to compose academic manuscripts, correspondence letters, and the text of educational exhibits. The advantage of using voice dictation is faster composition of manuscripts. However, use of such software requires preparation. The purpose of this article is to review the steps of adapting clinical dictation software for dictating academic manuscripts and to detail the advantages and limitations of this technique. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify the text strings that mark specific variables as optimization inputs and responses. This paper provides an overview of RCOTOOLS and its use.
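
    The wrapper pattern described above (substitute design variables into an input file, run the code, scrape the output) can be sketched generically in Python. The template marker convention, executable name, and response pattern below are assumptions of this sketch, not the actual NDARC or CAMRAD II file formats.

        import re
        import subprocess
        from pathlib import Path

        def run_file_based_code(template, input_file, design_vars, exe, output_file, response_pattern):
            """Generic wrapper for a file-based analysis code: substitute design
            variable values into an input template, run the executable, then
            extract a response value from the text output."""
            text = Path(template).read_text()
            for name, value in design_vars.items():
                # Markers like "{rotor_radius}" are an assumption of this sketch.
                text = text.replace("{" + name + "}", str(value))
            Path(input_file).write_text(text)
            subprocess.run([exe, input_file], check=True)
            out = Path(output_file).read_text()
            return float(re.search(response_pattern, out).group(1))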

  19. Toward Software Both Seen and Heard.

    ERIC Educational Resources Information Center

    Lazzaro, Joseph J.

    1996-01-01

    Visually impaired users are hampered by current PC software written for graphical user interfaces. Screen readers that vocalize displayed text require standardization that remains missing in the programming industry; the readers cannot interpret many cues in the Windows environment. More programming standards and adaptive technology for computers…

  20. A Constructivist Approach to E-Text Design for Use in Undergraduate Physiology Courses

    ERIC Educational Resources Information Center

    Rhodes, Ashley E.; Rozell, Timothy G.

    2015-01-01

    Electronic textbooks, or e-texts, will have an increasingly important role in college science courses within the next few years due to the rising costs of traditional texts and the increasing availability of software allowing instructors to create their own e-text. However, few guidelines exist in the literature to aid instructors in the…

  1. Automated toxicological screening reports of modified Agilent MSD Chemstation combined with Microsoft Visual Basic application programs.

    PubMed

    Choe, Sanggil; Kim, Suncheun; Choi, Hyeyoung; Choi, Hwakyoung; Chung, Heesun; Hwang, Bangyeon

    2010-06-15

    Agilent GC-MS MSD Chemstation offers automated library search reports for toxicological screening using the total ion chromatogram (TIC) and mass spectra in normal mode. Numerous peaks appear in the chromatogram of a biological specimen such as blood or urine, and large migrating peaks often obscure small target peaks; in addition, target peaks of low abundance regularly give wrong library search results or low matching scores. As a result, the retention time and mass spectrum of all the peaks in the chromatogram have to be checked to see if they are relevant. These repeated actions are very tedious and time-consuming for toxicologists. The MSD Chemstation software operates using a number of macro files which give commands and instructions on how to work on and extract data from the chromatogram and spectra. These macro files are developed with the software's own compiler. All the original macro files can be modified, and new macro files can be added to the original software by users. To get more accurate results with a more convenient method, and to save time in data analysis, we developed new macro files for report generation and inserted new menus in the Enhanced Data Analysis program. Toxicological screening reports generated by these new macro files are in text or graphic mode, and these reports can be generated with three different automated subtraction options. Text reports have a brief mode and a full mode, and graphic reports can be generated with or without the mass spectrum. Matched mass spectra and matching scores for detected compounds are printed in reports by the modified library searching modules. We have also developed an independent application program named DrugMan. This program manages the drug groups, lists and parameters in use in MSD Chemstation. The incorporation of DrugMan with the modified macro modules provides a powerful tool for toxicological screening and saves a lot of valuable time in toxicological work. (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  2. Use of sentiment analysis for capturing patient experience from free-text comments posted online.

    PubMed

    Greaves, Felix; Ramirez-Cano, Daniel; Millett, Christopher; Darzi, Ara; Donaldson, Liam

    2013-11-01

    There are large amounts of unstructured, free-text information about quality of health care available on the Internet in blogs, social networks, and on physician rating websites that are not captured in a systematic way. New analytical techniques, such as sentiment analysis, may allow us to understand and use this information more effectively to improve the quality of health care. We attempted to use machine learning to understand patients' unstructured comments about their care. We used sentiment analysis techniques to categorize online free-text comments by patients as either positive or negative descriptions of their health care. We tried to automatically predict whether a patient would recommend a hospital, whether the hospital was clean, and whether they were treated with dignity from their free-text description, compared to the patient's own quantitative rating of their care. We applied machine learning techniques to all 6412 online comments about hospitals on the English National Health Service website in 2010 using Weka data-mining software. We also compared the results obtained from sentiment analysis with the paper-based national inpatient survey results at the hospital level using Spearman rank correlation for all 161 acute adult hospital trusts in England. There was 81%, 84%, and 89% agreement between quantitative ratings of care and those derived from free-text comments using sentiment analysis for cleanliness, being treated with dignity, and overall recommendation of hospital respectively (kappa scores: .40-.74, P<.001 for all). We observed mild to moderate associations between our machine learning predictions and responses to the large patient survey for the three categories examined (Spearman rho 0.37-0.51, P<.001 for all). The prediction accuracy that we have achieved using this machine learning process suggests that we are able to predict, from free-text, a reasonably accurate assessment of patients' opinion about different performance aspects of a hospital and that these machine learning predictions are associated with results of more conventional surveys.
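
    The study used Weka; an analogous train-and-predict loop can be sketched in Python with scikit-learn. The comments and labels below are invented stand-ins for the rated NHS comments.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Free-text comments paired with the patient's own quantitative rating,
        # used here as the training label (1 = would recommend, 0 = would not).
        comments = ["the ward was spotless and staff were kind",
                    "dirty bathroom and nobody answered the call bell",
                    "treated with dignity at every step",
                    "felt ignored and the room was not clean"]
        labels = [1, 0, 1, 0]

        model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
        model.fit(comments, labels)
        prediction = model.predict(["very clean and caring staff"])[0]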

  3. Using software security analysis to verify the secure socket layer (SSL) protocol

    NASA Technical Reports Server (NTRS)

    Powell, John D.

    2004-01-01

    The National Aeronautics and Space Administration (NASA) has tens of thousands of networked computer systems and applications. Software security vulnerabilities present risks such as lost or corrupted data, information theft, and unavailability of critical systems. These risks represent potentially enormous costs to NASA. The NASA Code Q research initiative 'Reducing Software Security Risk (RSSR) Through an Integrated Approach' offers, among its capabilities, formal verification of software security properties, through the use of model-based verification (MBV) to address software security risks. [1,2,3,4,5,6] MBV is a formal approach to software assurance that combines analysis of software, via abstract models, with technology, such as model checkers, that provides automation of the mechanical portions of the analysis process. This paper will discuss: the need for formal analysis to assure software systems with respect to software security, and why testing alone cannot provide it; the means by which MBV with a Flexible Modeling Framework (FMF) accomplishes the necessary analysis task; and an example of FMF-style MBV in the verification of properties over the Secure Socket Layer (SSL) communication protocol as a demonstration.

  4. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2003-09-30

    Development of Automated Image Analysis Software for Suspended Marine Particle Classification. Scott Samson, Center for Ocean Technology. The objective is to develop automated image analysis software to reduce the effort and time required for manual identification of plankton images.

  5. A tool to include gamma analysis software into a quality assurance program.

    PubMed

    Agnew, Christina E; McGarry, Conor K

    2016-03-01

    To provide a tool to enable gamma analysis software algorithms to be included in a quality assurance (QA) program. Four image sets were created, comprising two geometric images to independently test the distance to agreement (DTA) and dose difference (DD) elements of the gamma algorithm, a clinical step-and-shoot IMRT field, and a clinical VMAT arc. The images were analysed using global and local gamma analysis with 2 in-house and 8 commercially available software packages encompassing 15 software versions. The effect of image resolution on gamma pass rates was also investigated. All but one package accurately calculated the gamma passing rate for the geometric images. Variation in global gamma passing rates of 1% at 3%/3mm and over 2% at 1%/1mm was measured between software packages and software versions with analysis of appropriately sampled images. This study provides a suite of test images and the gamma pass rates achieved for a selection of commercially available software. This image suite will enable validation of gamma analysis software within a QA program and provide a frame of reference by which to compare results reported in the literature from various manufacturers and software versions. Copyright © 2015. Published by Elsevier Ireland Ltd.
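
    A brute-force sketch of the underlying gamma computation, in 1D with global dose normalization, may clarify how the DTA and DD elements combine; production QA software works on interpolated 2D/3D dose grids, so this is illustrative only.

        import numpy as np

        def gamma_pass_rate(ref_dose, eval_dose, spacing_mm, dd_percent=3.0, dta_mm=3.0):
            """Brute-force 1D global gamma analysis: for every reference point,
            search all evaluated points for the minimum combined dose-difference /
            distance-to-agreement metric; report the fraction with gamma <= 1."""
            x = np.arange(ref_dose.size) * spacing_mm
            dd = dd_percent / 100.0 * ref_dose.max()      # global normalization
            gammas = np.empty(ref_dose.size)
            for i in range(ref_dose.size):
                dose_term = (eval_dose - ref_dose[i]) / dd
                dist_term = (x - x[i]) / dta_mm
                gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
            return (gammas <= 1.0).mean()

        ref = np.array([10.0, 50.0, 90.0, 100.0, 90.0, 50.0, 10.0])
        ev = np.array([10.5, 52.0, 88.0, 99.0, 91.5, 49.0, 10.0])
        rate = gamma_pass_rate(ref, ev, spacing_mm=1.0)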

  6. The Role of Data Analysis Software in Graduate Programs in Education and Post-Graduate Research

    ERIC Educational Resources Information Center

    Harwell, Michael

    2018-01-01

    The importance of data analysis software in graduate programs in education and post-graduate educational research is self-evident. However the role of this software in facilitating supererogated statistical practice versus "cookbookery" is unclear. The need to rigorously document the role of data analysis software in students' graduate…

  7. RNAstructure: software for RNA secondary structure prediction and analysis.

    PubMed

    Reuter, Jessica S; Mathews, David H

    2010-03-15

    To understand an RNA sequence's mechanism of action, the structure must be known. Furthermore, target RNA structure is an important consideration in the design of small interfering RNAs and antisense DNA oligonucleotides. RNA secondary structure prediction, using thermodynamics, can be used to develop hypotheses about the structure of an RNA sequence. RNAstructure is a software package for RNA secondary structure prediction and analysis. It uses thermodynamics and utilizes the most recent set of nearest neighbor parameters from the Turner group. It includes methods for secondary structure prediction (using several algorithms), prediction of base pair probabilities, bimolecular structure prediction, and prediction of a structure common to two sequences. This contribution describes new extensions to the package, including a library of C++ classes for incorporation into other programs, a user-friendly graphical user interface written in JAVA, and new Unix-style text interfaces. The original graphical user interface for Microsoft Windows is still maintained. The extensions to RNAstructure serve to make RNA secondary structure prediction user-friendly. The package is available for download from the Mathews lab homepage at http://rna.urmc.rochester.edu/RNAstructure.html.
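
    RNAstructure's prediction is driven by thermodynamic nearest-neighbor parameters; as a much simpler illustration of the dynamic-programming idea behind secondary structure prediction, the sketch below implements the classic Nussinov base-pair maximization algorithm (not the package's method).

        def nussinov_max_pairs(seq, min_loop=3):
            """Nussinov dynamic programming: maximum number of nested
            Watson-Crick/GU pairs (a teaching simplification, not the
            free-energy minimization RNAstructure performs)."""
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = dp[i][j - 1]                       # j unpaired
                    for k in range(i, j - min_loop):
                        if (seq[k], seq[j]) in pairs:         # pair k with j
                            left = dp[i][k - 1] if k > i else 0
                            best = max(best, left + 1 + dp[k + 1][j - 1])
                    dp[i][j] = best
            return dp[0][n - 1] if n else 0

        count = nussinov_max_pairs("GGGAAAUCC")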

  8. Argo: an integrative, interactive, text mining-based workbench supporting curation

    PubMed Central

    Rak, Rafal; Rowley, Andrew; Black, William; Ananiadou, Sophia

    2012-01-01

    Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphic user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser that saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving the users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks. Database URL: http://www.nactem.ac.uk/Argo PMID:22434844

  9. What's New in Software? Hot New Tool: The Hypertext.

    ERIC Educational Resources Information Center

    Hedley, Carolyn N.

    1989-01-01

    This article surveys recent developments in hypertext software, a highly interactive nonsequential reading/writing/database approach to research and teaching that allows paths to be created through related materials including text, graphics, video, and animation sources. Described are uses, advantages, and problems of hypertext. (PB)

  10. Description of the IV + V System Software Package.

    ERIC Educational Resources Information Center

    Microcomputers for Information Management: An International Journal for Library and Information Services, 1984

    1984-01-01

    Describes the IV + V System, a software package designed by the Institut fur Maschinelle Dokumentation for the United Nations General Information Programme and UNISIST to support automation of local information and documentation services. Principle program features and functions outlined include input/output, databank, text image, output, and…

  11. Usability study of clinical exome analysis software: top lessons learned and recommendations.

    PubMed

    Shyr, Casper; Kushniruk, Andre; Wasserman, Wyeth W

    2014-10-01

    New DNA sequencing technologies have revolutionized the search for genetic disruptions. Targeted sequencing of all protein coding regions of the genome, called exome analysis, is actively used in research-oriented genetics clinics, with the transition to exomes as a standard procedure underway. This transition is challenging; identification of potentially causal mutation(s) amongst ∼10^6 variants requires specialized computation in combination with expert assessment. This study analyzes the usability of user interfaces for clinical exome analysis software. There are two study objectives: (1) To ascertain the key features of successful user interfaces for clinical exome analysis software based on the perspective of expert clinical geneticists, (2) To assess user-system interactions in order to reveal strengths and weaknesses of existing software, inform future design, and accelerate the clinical uptake of exome analysis. Surveys, interviews, and cognitive task analysis were performed for the assessment of two next-generation exome sequence analysis software packages. The subjects included ten clinical geneticists who interacted with the software packages using the "think aloud" method. Subjects' interactions with the software were recorded in their clinical office within an urban research and teaching hospital. All major user interface events (from the user interactions with the packages) were time-stamped and annotated with coding categories to identify usability issues in order to characterize desired features and deficiencies in the user experience. We detected 193 usability issues, the majority of which concern interface layout and navigation, and the resolution of reports. Our study highlights gaps in specific software features typical within exome analysis. The clinicians perform best when the flow of the system is structured into well-defined yet customizable layers for incorporation within the clinical workflow. The results highlight opportunities to dramatically accelerate clinician analysis and interpretation of patient genomic data. We present the first application of usability methods to evaluate software interfaces in the context of exome analysis. Our results highlight how the study of user responses can lead to identification of usability issues and challenges and reveal software reengineering opportunities for improving clinical next-generation sequencing analysis. While the evaluation focused on two distinctive software tools, the results are general and should inform active and future software development for genome analysis software. As large-scale genome analysis becomes increasingly common in healthcare, it is critical that efficient and effective software interfaces are provided to accelerate clinical adoption of the technology. Implications for improved design of such applications are discussed. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Fault Tree Analysis Application for Safety and Reliability

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies root causes in system components, but when software is identified as a root cause, the tools do not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
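
    As a toy illustration of the computation FTA tools automate, the sketch below evaluates the top-event probability of a small AND/OR fault tree under an independence assumption; the tree and probabilities are invented.

        from math import prod

        def gate_probability(node, basic_probs):
            """Recursively evaluate the top-event probability of a fault tree whose
            nodes are ('AND', children), ('OR', children), or basic-event names,
            assuming statistically independent basic events."""
            if isinstance(node, str):
                return basic_probs[node]
            kind, children = node
            p = [gate_probability(c, basic_probs) for c in children]
            if kind == "AND":
                return prod(p)
            return 1.0 - prod(1.0 - q for q in p)     # OR gate

        # Top event: thruster fails if (sensor fault AND software fault) OR power loss.
        tree = ("OR", [("AND", ["sensor_fault", "sw_fault"]), "power_loss"])
        p_top = gate_probability(tree, {"sensor_fault": 1e-3, "sw_fault": 1e-2, "power_loss": 1e-4})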

  13. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.

  14. A Visualization-Based Tutoring Tool for Engineering Education

    NASA Astrophysics Data System (ADS)

    Nguyen, Tang-Hung; Khoo, I.-Hung

    2010-06-01

    In engineering disciplines, students usually have a hard time visualizing aspects of engineering analysis and design that are inherently too complex or abstract to fully understand without the aid of visual explanations or visualizations. For example, when learning the materials and sequences of the construction process, students need to visualize how all the components of a constructed facility are assembled. Such visualization cannot be achieved in a textbook or a traditional lecturing environment. In this paper, the authors present the development of computer tutoring software in which different visualization tools, including video clips, three-dimensional models, drawings, and pictures/photos, together with complementary text, are used to help students deeply understand and effectively master the material. The paper also discusses the implementation and effectiveness evaluation of the proposed tutoring software, which was used to teach a construction engineering management course offered at California State University, Long Beach.

  15. Robust binarization of degraded document images using heuristics

    NASA Astrophysics Data System (ADS)

    Parker, Jon; Frieder, Ophir; Frieder, Gideon

    2013-12-01

    Historically significant documents are often discovered with defects that make them difficult to read and analyze. This is particularly troublesome if the defects prevent software from performing an automated analysis. Image enhancement methods are used to remove or minimize document defects, improve software performance, and generally make images more legible. We describe an automated image enhancement method that is independent of the input page and requires no training data. The approach applies to color or greyscale images containing handwritten script, typewritten text, images, and mixtures thereof. We evaluated the image enhancement method against the test images provided by the 2011 Document Image Binarization Contest (DIBCO). Our method outperforms all 2011 DIBCO entrants in terms of average F1 measure, and does so with a significantly lower variance than the top contest entrants. The capability of the proposed method is also illustrated using select images from a collection of historic documents stored at Yad Vashem Holocaust Memorial in Israel.
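
    The paper's specific heuristics are not reproduced here, but a standard local (Sauvola-style) threshold is a common baseline for the document binarization task it evaluates; the sketch below implements one under that assumption.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def sauvola_binarize(gray: np.ndarray, window: int = 25,
                           k: float = 0.2, R: float = 128.0) -> np.ndarray:
          """Binarize an 8-bit greyscale image with a local Sauvola threshold."""
          gray = gray.astype(np.float64)
          mean = uniform_filter(gray, size=window)               # local mean
          sq_mean = uniform_filter(gray * gray, size=window)     # local E[x^2]
          std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))  # local std dev
          threshold = mean * (1.0 + k * (std / R - 1.0))
          return (gray > threshold).astype(np.uint8) * 255       # 255 = background

      # Usage, assuming a page already loaded as a 2-D NumPy array:
      # binary = sauvola_binarize(page)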

  16. GIDEP Batching Tool

    NASA Technical Reports Server (NTRS)

    Fong, Danny; Odell, Dorice; Barry, Peter; Abrahamian, Tomik

    2008-01-01

    This software provides internal, automated searching of GIDEP (Government-Industry Data Exchange Program) Alert data imported from the GIDEP government Web site. The batching tool allows the import of a single parts list in tab-delimited text format into the local JPL GIDEP database. Delimiters are removed from every part number, and both the original part numbers (with delimiters) and the newly generated list (without delimiters) are used for comparison. The two lists are run against the GIDEP imports, and any matches are output. This feature works only with Netscape 2.0 or greater, or Internet Explorer 4.0 or greater. The user selects the browser button to choose a text file to import; when the submit button is pressed, the script imports alerts from the text file into the local JPL GIDEP database. The batching tool provides complete in-house control over exported material and data, with automated batch matching: it matches entries in the parts list against the database tables and yields results that aid further research and analysis. This gives more control over GIDEP information for metrics and reporting than the government site provides. The software yields results quickly and gives enough control over the external data to generate reports not available from the external source, and there is enough space to store years of data. The program relates to risk identification and management for projects, encompassing GIDEP alert information for flight parts used in space exploration.
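
    A minimal sketch of the batching idea follows: normalize part numbers by stripping delimiters, then match a tab-delimited parts list against imported alert records. The file layout, column positions, and delimiter set are assumptions for illustration.

      import csv
      import re

      def normalize(part_number: str) -> str:
          """Strip common delimiters (dashes, dots, slashes, spaces) and uppercase."""
          return re.sub(r"[-./\s]", "", part_number).upper()

      def load_parts_list(path: str) -> list:
          """Read part numbers from the first column of a tab-delimited file."""
          with open(path, newline="") as fh:
              return [row[0].strip() for row in csv.reader(fh, delimiter="\t") if row]

      def match_alerts(parts: list, alert_part_numbers: list) -> set:
          """Return parts whose original or normalized form appears in the alerts."""
          alert_keys = set(alert_part_numbers) | {normalize(p) for p in alert_part_numbers}
          return {p for p in parts if p in alert_keys or normalize(p) in alert_keys}

      # Usage with a hypothetical file and alert table:
      # parts = load_parts_list("parts_list.txt")
      # print(match_alerts(parts, ["12-345-A", "99.880.B"]))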

  17. Proceedings of the 14th Annual Software Engineering Workshop

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Several software-related topics are presented. Topics covered include studies and experiments at the Software Engineering Laboratory at the Goddard Space Flight Center, predicting project success from the software project management process, software environments, testing in a reuse environment, domain-directed reuse, and classification tree analysis using the Amadeus measurement and empirical analysis system.

  18. Design and validation of Segment--freely available software for cardiovascular image analysis.

    PubMed

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-11

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
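
    The validation technique described, a test script that exercises the software and checks its output, can be sketched generically as below. The metric names, tolerance, and file format are hypothetical, not Segment's actual test harness.

      import json
      import math

      def check(name: str, computed: float, reference: float, tol: float = 1e-3) -> bool:
          """Compare one computed metric against its stored reference value."""
          ok = math.isclose(computed, reference, rel_tol=tol)
          print(f"{name}: computed={computed:.4f} reference={reference:.4f} "
                f"{'PASS' if ok else 'FAIL'}")
          return ok

      def run_suite(reference_path: str, results: dict) -> bool:
          """Validate a dict of computed metrics against stored reference values."""
          with open(reference_path) as fh:
              reference = json.load(fh)
          return all(check(name, results[name], value) for name, value in reference.items())

      # Usage: after analyzing a fixed test image stack, compare derived metrics
      # (e.g., left-ventricular volume) with previously validated values:
      # run_suite("reference_metrics.json", {"lv_volume_ml": 142.1, "flow_ml_s": 67.9})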

  19. Supporting geoscience with graphical-user-interface Internet tools for the Macintosh

    NASA Astrophysics Data System (ADS)

    Robin, Bernard

    1995-07-01

    This paper describes a suite of Macintosh graphical-user-interface (GUI) software programs that can be used in conjunction with the Internet to support geoscience education. These software programs allow science educators to access and retrieve a large body of resources from an increasing number of network sites, taking advantage of the intuitive, simple-to-use Macintosh operating system. With these tools, educators easily can locate, download, and exchange not only text files but also sound resources, video movie clips, and software application files from their desktop computers. Another major advantage of these software tools is that they are available at no cost and may be distributed freely. The following GUI software tools are described including examples of how they can be used in an educational setting: ∗ Eudora—an e-mail program ∗ NewsWatcher—a newsreader ∗ TurboGopher—a Gopher program ∗ Fetch—a software application for easy File Transfer Protocol (FTP) ∗ NCSA Mosaic—a worldwide hypertext browsing program. An explosive growth of online archives currently is underway as new electronic sites are being added continuously to the Internet. Many of these resources may be of interest to science educators who learn they can share not only ASCII text files, but also graphic image files, sound resources, QuickTime movie clips, and hypermedia projects with colleagues from locations around the world. These powerful, yet simple to learn GUI software tools are providing a revolution in how knowledge can be accessed, retrieved, and shared.

  20. User-driven integrated software lives: ``Paleomag'' paleomagnetics analysis on the Macintosh

    NASA Astrophysics Data System (ADS)

    Jones, Craig H.

    2002-12-01

    "PaleoMag," a paleomagnetics analysis package originally developed for the Macintosh operating system in 1988, allows examination of demagnetization of individual samples and analysis of directional data from collections of samples. Prior to recent reinvigorated development of the software for both Macintosh and Windows, it was widely used despite not running properly on machines and operating systems sold after 1995. This somewhat surprising situation demonstrates that there is a continued need for integrated analysis software within the earth sciences, in addition to well-developed scripting and batch-mode software. One distinct advantage of software like PaleoMag is in the ability to combine quality control with analysis within a unique graphical environment. Because such demands are frequent within the earth sciences, means of nurturing the development of similar software should be found.

  1. Teacher's Guide to SERAPHIM Software I. Chemistry: Experimental Foundations.

    ERIC Educational Resources Information Center

    Bogner, Donna J.

    Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the first in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemistry: Experimental Foundations." Program suggestions are…

  2. Teacher's Guide to SERAPHIM Software II. Chemical Principles.

    ERIC Educational Resources Information Center

    Bogner, Donna J.

    Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the second in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemical Principles." Program suggestions are arranged in the…

  3. Apples in the Apple Library--How One Library Took a Byte.

    ERIC Educational Resources Information Center

    Ertel, Monica

    1983-01-01

    Summarizes automation of a specialized library at Apple Computer, Inc., describing software packages chosen for the following functions: word processing/text editing; cataloging and circulation; reference; and in-house databases. Examples of each function and additional sources of information on software and equipment mentioned in the article are…

  4. Use of Writing with Symbols 2000 Software to Facilitate Emergent Literacy Development

    ERIC Educational Resources Information Center

    Parette, Howard P.; Boeckmann, Nichole M.; Hourcade, Jack J.

    2008-01-01

    This paper outlines the use of the "Writing with Symbols 2000" software to facilitate emergent literacy development. The program's use of pictures incorporated with text has great potential to help young children with and without disabilities acquire fundamental literacy concepts about print, phonemic awareness, alphabetic principle, vocabulary…

  5. Nonfiction Reading Comprehension in Middle School: Exploring an Interactive Software Approach

    ERIC Educational Resources Information Center

    Wolff, Evelyn S.; Isecke, Harriet; Rhoads, Christopher; Madura, John P.

    2013-01-01

    The struggles of students in the United States to comprehend non-fiction science text are well documented. Middle school students, in particular, have minimal instruction in comprehending nonfiction and flounder on assessments. This article describes the development process of the Readorium software, an interactive web-based program being…

  6. 75 FR 80677 - The Low-Income Definition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-23

    ... original regulatory text so it is consistent with the geo-coding software the agency uses to make the low... Union Act (Act) authorizes the NCUA Board (Board) to define ``low-income members'' so that credit unions... process of implementing geo- coding software to make the calculation automatically for credit unions...

  7. Designers' models of the human-computer interface

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Breedin, Sarah D.

    1993-01-01

    Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather, they begin with a general model of the object they are designing. The authors' approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components, such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.
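
    For readers unfamiliar with the analysis step, the following sketch performs hierarchical clustering on a matrix of similarity judgments, in the spirit of the study's cluster analysis. The concepts and dissimilarity values are invented for illustration.

      import matplotlib.pyplot as plt
      import numpy as np
      from scipy.cluster.hierarchy import dendrogram, linkage
      from scipy.spatial.distance import squareform

      concepts = ["window", "text", "graphics", "command language", "editing", "help"]

      # Symmetric dissimilarity matrix (0 = identical, 1 = unrelated), invented here.
      D = np.array([
          [0.0, 0.3, 0.2, 0.9, 0.8, 0.7],
          [0.3, 0.0, 0.4, 0.8, 0.6, 0.7],
          [0.2, 0.4, 0.0, 0.9, 0.8, 0.8],
          [0.9, 0.8, 0.9, 0.0, 0.3, 0.5],
          [0.8, 0.6, 0.8, 0.3, 0.0, 0.5],
          [0.7, 0.7, 0.8, 0.5, 0.5, 0.0],
      ])

      Z = linkage(squareform(D), method="average")  # agglomerative clustering
      dendrogram(Z, labels=concepts)                # display components vs. concepts
      plt.show()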

  8. Two-step web-mining approach to study geology/geophysics-related open-source software projects

    NASA Astrophysics Data System (ADS)

    Behrends, Knut; Conze, Ronald

    2013-04-01

    Geology/geophysics is a highly interdisciplinary science, overlapping with, for instance, physics, biology and chemistry. In today's software-intensive work environments, geoscientists often encounter new open-source software from scientific fields that are only remotely related to their own field of expertise. We show how web-mining techniques can help to carry out systematic discovery and evaluation of such software. In a first step, we downloaded ~500 abstracts (each consisting of ~1 kb UTF-8 text) from agu-fm12.abstractcentral.com. This web site hosts the abstracts of all publications presented at AGU Fall Meeting 2012, the world's largest annual geology/geophysics conference. All abstracts belonged to the category "Earth and Space Science Informatics", an interdisciplinary label cross-cutting many disciplines such as "deep biosphere", "atmospheric research", and "mineral physics". Each publication was represented by a highly structured record with ~20 short data attributes, the largest being the unstructured "abstract" field. We processed the texts of the abstracts with the statistics software "R" to calculate a corpus and a term-document matrix. Using the R package "tm", we applied text-mining techniques to filter data and develop hypotheses about software-development activities happening in various geology/geophysics fields. By analyzing the term-document matrix with basic techniques (e.g., word frequencies, co-occurrences, weighting) as well as more complex methods (clustering, classification), several key pieces of information were extracted. For example, text-mining can be used to identify scientists who are also developers of open-source scientific software, and the names of their programming projects and codes can also be identified. In a second step, based on the intermediate results found by processing the conference abstracts, any new hypotheses can be tested in another web-mining subproject: by merging the dataset with open data from github.com and stackoverflow.com. These popular, developer-centric websites have powerful application-programmer interfaces and follow an open-data policy. In this regard, these sites offer a web-accessible reservoir of information that can be tapped to study questions such as: which open-source software projects are eminent in the various geoscience fields? What are the most popular programming languages? How are they trending? Are there any interesting temporal patterns in committer activities? How large are programming teams and how do they change over time? What free software packages exist in the vast realms of related fields? Does the software from these fields have capabilities that might still be useful to me as a researcher, or that can help me perform my work better? Are there any open-source projects that might be commercially interesting? This evaluation strategy tends to reveal newer programming projects. As many important legacy codes are not hosted on open-source code repositories, the presented search method might overlook some older projects.
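
    The study builds its corpus and term-document matrix with R's "tm" package; for consistency with the other examples in this collection, the equivalent minimal sketch below uses Python. The abstract texts are placeholders.

      from sklearn.feature_extraction.text import CountVectorizer

      abstracts = [
          "open source software for borehole data management",
          "atmospheric research software written in Python",
          "mineral physics simulation code released as open source",
      ]

      vectorizer = CountVectorizer(stop_words="english")
      tdm = vectorizer.fit_transform(abstracts)      # documents x terms matrix

      # Corpus-wide word frequencies, a simple text-mining starting point:
      freqs = zip(vectorizer.get_feature_names_out(), tdm.sum(axis=0).A1)
      for term, count in sorted(freqs, key=lambda item: -item[1])[:5]:
          print(term, count)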

  9. Clinical records anonymisation and text extraction (CRATE): an open-source software system.

    PubMed

    Cardinal, Rudolf N

    2017-04-26

    Electronic medical records contain information of value for research, but contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records with secure pseudonym generation, full-text indexing, and a consent-to-contact process is possible and practical using entirely free and open-source software.
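
    One standard way to realize the identifier-to-pseudonym mapping described above is a keyed hash (HMAC) over the patient identifier; the sketch below shows that generic approach and is not necessarily CRATE's exact configuration.

      import hashlib
      import hmac

      SECRET_KEY = b"replace-with-a-long-random-key-kept-apart-from-the-research-db"

      def research_id(patient_id: str) -> str:
          """Deterministic pseudonym: the same patient always maps to the same
          research ID, but the mapping cannot be reversed without the key."""
          digest = hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256)
          return digest.hexdigest()[:16]  # truncated here for readability

      print(research_id("NHS-1234567890"))  # stable pseudonym usable for linkage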

  10. Semantic Metrics for Analysis of Software

    NASA Technical Reports Server (NTRS)

    Etzkorn, Letha H.; Cox, Glenn W.; Farrington, Phil; Utley, Dawn R.; Ghalston, Sampson; Stein, Cara

    2005-01-01

    A recently conceived suite of object-oriented software metrics focuses on semantic aspects of software, in contradistinction to traditional software metrics, which focus on syntactic aspects of software. Semantic metrics represent a more human-oriented view of software than do syntactic metrics. The semantic metrics of a given computer program are calculated by use of the output of a knowledge-based analysis of the program, and are substantially more representative of software quality and more readily comprehensible from a human perspective than are syntactic metrics.

  11. Integrating query of relational and textual data in clinical databases: a case study.

    PubMed

    Fisk, John M; Mutalik, Pradeep; Levin, Forrest W; Erdos, Joseph; Taylor, Caroline; Nadkarni, Prakash

    2003-01-01

    The authors designed and implemented a clinical data mart composed of an integrated information retrieval (IR) and relational database management system (RDBMS). Using commodity software, which supports interactive, attribute-centric text and relational searches, the mart houses 2.8 million documents that span a five-year period and supports basic IR features such as Boolean searches, stemming, and proximity and fuzzy searching. Results are relevance-ranked using either "total documents per patient" or "report type weighting." Non-curated medical text has a significant degree of malformation with respect to spelling and punctuation, which creates difficulties for text indexing and searching. Presently, the IR facilities of RDBMS packages lack the features necessary to handle such malformed text adequately. A robust IR+RDBMS system can be developed, but it requires integrating RDBMSs with third-party IR software. RDBMS vendors need to make their IR offerings more accessible to non-programmers.

  12. Adolescent Female Text Messaging Preferences to Prevent Pregnancy After an Emergency Department Visit: A Qualitative Analysis.

    PubMed

    Chernick, Lauren Stephanie; Schnall, Rebecca; Stockwell, Melissa S; Castaño, Paula M; Higgins, Tracy; Westhoff, Carolyn; Santelli, John; Dayan, Peter S

    2016-09-29

    Over 15 million adolescents use the emergency department (ED) each year in the United States. Adolescent females who use the ED for medical care have been found to be at high risk for unintended pregnancy. Given that adolescents represent the largest users of text messaging and are receptive to receiving text messages related to their sexual health, the ED visit represents an opportunity for intervention. The aim of this qualitative study was to explore interest in and preferences for the content, frequency, and timing of an ED-based text message intervention to prevent pregnancy for adolescent females. We conducted semistructured, open-ended interviews in one urban ED in the United States with adolescent females aged 14-19 years. Eligible subjects were adolescents who were sexually active in the past 3 months, presented to the ED for a reproductive health complaint, owned a mobile phone, and did not use effective contraception. Using an interview guide, enrollment continued until saturation of key themes. The investigators designed sample text messages using the Health Beliefs Model and participants viewed these on a mobile phone. The team recorded, transcribed, and coded interviews based on thematic analysis using the qualitative analysis software NVivo and Excel. Participants (n=14) were predominantly Hispanic (13/14; 93%), insured (13/14; 93%), ED users in the past year (12/14; 86%), and frequent text users (10/14; 71% had sent or received >30 texts per day). All were interested in receiving text messages from the ED about pregnancy prevention, favoring messages that were "brief," "professional," and "nonaccusatory." Respondents favored texts with links to websites, repeated information regarding places to receive "confidential" care, and focused information on contraception options and misconceptions. Preferences for text message frequency varied from daily to monthly, with random hours of delivery to maintain "surprise." No participant feared that text messages would violate her privacy. Adolescent female patients at high pregnancy risk are interested in ED-based pregnancy prevention provided by texting. Understanding preferences for the content, frequency, and timing of messages can guide in designing future interventions in the ED.

  13. SeeGH--a software tool for visualization of whole genome array comparative genomic hybridization data.

    PubMed

    Chi, Bryan; DeLeeuw, Ronald J; Coe, Bradley P; MacAulay, Calum; Lam, Wan L

    2004-02-09

    Array comparative genomic hybridization (CGH) is a technique which detects copy number differences in DNA segments. Complete sequencing of the human genome and the development of an array representing a tiling set of tens of thousands of DNA segments spanning the entire human genome have made high resolution copy number analysis throughout the genome possible. Since array CGH provides a signal ratio for each DNA segment, visualization requires the reassembly of individual data points into chromosome profiles. We have developed a visualization tool for displaying whole genome array CGH data in the context of chromosomal location. SeeGH is an application that translates spot signal ratio data from array CGH experiments to displays of high resolution chromosome profiles. Data is imported from a simple tab-delimited text file obtained from standard microarray image analysis software. SeeGH processes the signal ratio data and graphically displays it in a conventional CGH karyotype diagram with the added features of magnification and DNA segment annotation. In this process, SeeGH imports the data into a database, calculates the average ratio and standard deviation for each replicate spot, and links them to chromosome regions for graphical display. Once the data is displayed, users have the option of hiding or flagging DNA segments based on user-defined criteria, and can retrieve annotation information such as clone name, NCBI sequence accession number, ratio, base pair position on the chromosome, and standard deviation. SeeGH represents a novel software tool used to view and analyze array CGH data. The software gives users the ability to view the data in an overall genomic view as well as magnify specific chromosomal regions, facilitating the precise localization of genetic alterations. SeeGH is easily installed and runs on Microsoft Windows 2000 or later environments.
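
    The preprocessing step described, averaging signal ratios and computing a standard deviation per replicate spot, can be sketched as follows. The column names and flagging criterion are assumptions for illustration.

      import pandas as pd

      # Assumed columns: clone, chromosome, start_bp, log2_ratio (one row per spot).
      spots = pd.read_csv("cgh_ratios.txt", sep="\t")

      per_clone = (
          spots.groupby(["clone", "chromosome", "start_bp"])["log2_ratio"]
          .agg(mean_ratio="mean", std_dev="std", n_spots="count")
          .reset_index()
          .sort_values(["chromosome", "start_bp"])
      )

      # Flag segments by a user-defined criterion, as in the display options above:
      per_clone["flagged"] = per_clone["std_dev"] > 0.2
      print(per_clone.head())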

  14. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience

    PubMed Central

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac

    2017-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions. PMID:29278255
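
    As a hedged illustration of NHPP model fitting, the sketch below fits the simple Goel-Okumoto mean value function to invented cumulative defect counts; the paper's best-fitting models were the Log-Logistic and S-Shaped NHPP variants, which can be fitted the same way with different mean value functions.

      import numpy as np
      from scipy.optimize import curve_fit

      def goel_okumoto(t, a, b):
          """Expected cumulative defects by time t (a = total defects, b = rate)."""
          return a * (1.0 - np.exp(-b * t))

      weeks = np.arange(1, 11, dtype=float)
      cum_defects = np.array([5, 11, 17, 21, 25, 27, 29, 30, 31, 31], dtype=float)

      (a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_defects, p0=[35.0, 0.2])
      print(f"estimated total defects a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
      print("predicted residual defects:", round(a_hat - cum_defects[-1], 1))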

  16. Keyless Entry: Building a Text Database Using OCR Technology.

    ERIC Educational Resources Information Center

    Grotophorst, Clyde W.

    1989-01-01

    Discusses the use of optical character recognition (OCR) technology to produce an ASCII text database. A tutorial on digital scanning and OCR is provided, and a systems integration project which used the Calera CDP-3000XF scanner and text retrieval software to construct a database of dissertations at George Mason University is described. (four…
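
    A modern, minimal equivalent of the scanning-to-text workflow can be sketched with common open-source OCR libraries (requires a local Tesseract installation); the file names and query below are placeholders.

      from PIL import Image
      import pytesseract

      def ocr_page(image_path: str) -> str:
          """Convert a scanned page image to plain text."""
          return pytesseract.image_to_string(Image.open(image_path))

      # Build a tiny searchable store: map each scanned page to its extracted text.
      pages = {"dissertation_p1.png": ocr_page("dissertation_p1.png")}
      hits = {doc for doc, text in pages.items() if "retrieval" in text.lower()}
      print(hits)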

  17. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    ERIC Educational Resources Information Center

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
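
    One baseline implementation of clustering short responses, assuming TF-IDF features and k-means rather than whatever exact methods the (truncated) abstract describes, looks like this:

      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer

      responses = [  # invented short answers to a science item
          "the ice melts because it gets warmer",
          "heat from the sun melts the ice",
          "the graph shows temperature rising",
          "temperature increases over time in the chart",
      ]

      X = TfidfVectorizer().fit_transform(responses)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      for response, label in zip(responses, labels):
          print(label, response)  # a human then assigns one code per cluster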

  18. Text, Graphics, and Multimedia Materials Employed in Learning a Computer-Based Procedural Task

    ERIC Educational Resources Information Center

    Coffindaffer, Kari Christine Carlson

    2010-01-01

    The present research study investigated the interaction of graphic design students with different forms of software training materials. Four versions of the procedural task instructions were developed (A) Traditional Textbook with Still Images, (B) Modified Text with Integrated Still Images, (C) Onscreen Modified Text with Silent Onscreen Video…

  19. An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development

    PubMed Central

    Knauff, Markus; Nejasmic, Jelica

    2014-01-01

    The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors. PMID:25526083

  20. Online self-expression and experimentation as 'reflectivism': Using text analytics to examine the participatory forum Hello Sunday Morning.

    PubMed

    Carah, Nicholas; Meurk, Carla; Angus, Daniel

    2017-03-01

    Hello Sunday Morning is an online health promotion organisation that began in 2009. Hello Sunday Morning asks participants to stop consuming alcohol for a period of time, set a goal and document their progress on a personal blog. Hello Sunday Morning is a unique health intervention for three interrelated reasons: (1) it was generated outside a clinical setting, (2) it uses new media technologies to create structured forms of participation in an iterative and open-ended way and (3) participants generate a written record of their progress along with demographic, behavioural and engagement data. This article presents a text analysis of the blog posts of Hello Sunday Morning participants using the software program Leximancer. Analysis of blogs illustrates how participants' expressions change over time. In the first month, participants tended to set goals, describe their current drinking practices in individual and cultural terms, express hopes and anxieties and report on early efforts to change. After month 1, participants continued to report on efforts to change and associated challenges and reflect on their place as individuals in a drinking culture. In addition to this, participants evaluated their efforts to change and presented their 'findings' and 'theorised' them to provide advice for others. We contextualise this text analysis with respect to Hello Sunday Morning's development of more structured forms of online participation. We offer a critical appraisal of the value of text analytics in the development of online health interventions.

  1. National survey on dose data analysis in computed tomography.

    PubMed

    Heilmaier, Christina; Treier, Reto; Merkle, Elmar Max; Alkhadi, Hatem; Weishaupt, Dominik; Schindera, Sebastian

    2018-05-28

    A nationwide survey was performed assessing current practice of dose data analysis in computed tomography (CT). All radiological departments in Switzerland were asked to participate in the on-line survey composed of 19 questions (16 multiple choice, 3 free text). It consisted of four sections: (1) general information on the department, (2) dose data analysis, (3) use of a dose management software (DMS) and (4) radiation protection activities. In total, 152 out of 241 Swiss radiological departments filled in the whole questionnaire (return rate, 63%). Seventy-nine per cent of the departments (n = 120/152) analyse dose data on a regular basis, with considerable heterogeneity in the frequency (1-2 times per year, 45%, n = 54/120; every month, 35%, n = 42/120) and method of analysis. Manual analysis is carried out by 58% (n = 70/120), compared with 42% (n = 50/120) of departments using a DMS. Purchase of a DMS is planned by 43% (n = 30/70) of the departments with manual analysis. Real-time analysis of dose data is performed by 42% (n = 21/50) of the departments with a DMS; however, residents can access the DMS in clinical routine in only 20% (n = 10/50) of the departments. An interdisciplinary dose team, which among other things communicates dose data internally (63%, n = 76/120) and externally, is already implemented in 57% (n = 68/120) of the departments. Swiss radiological departments are committed to radiation safety. However, there is high heterogeneity among them regarding the frequency and method of dose data analysis as well as the use of DMS and radiation protection activities. • Swiss radiological departments are committed to and interested in radiation safety, as shown by the 63% return rate of the survey. • Seventy-nine per cent of departments analyse dose data on a regular basis, with differences in the frequency and method of analysis: 42% use a dose management software, while 58% currently perform manual dose data analysis. Of the latter, 43% plan to buy a dose management software. • Currently, only 25% of the departments add radiation exposure data to the final CT report.

  2. An online database for plant image analysis software tools.

    PubMed

    Lobet, Guillaume; Draye, Xavier; Périlleux, Claire

    2013-10-09

    Recent years have seen an increase in methods for plant phenotyping using image analyses. These methods require new software solutions for data extraction and treatment. These solutions are instrumental in supporting various research pipelines, ranging from the localisation of cellular compounds to the quantification of tree canopies. However, due to the variety of existing tools and the lack of a central repository, it is challenging for researchers to identify the software best suited to their research. We present an online, manually curated database referencing more than 90 plant image analysis software solutions. The website, plant-image-analysis.org, presents each software solution in a uniform and concise manner, enabling users to identify the available solutions for their experimental needs. The website also enables user feedback, evaluations and new software submissions. The plant-image-analysis.org database provides an overview of existing plant image analysis software. The aim of such a toolbox is to help users find solutions, and to provide developers a way to exchange and communicate about their work.

  3. GWAMA: software for genome-wide association meta-analysis.

    PubMed

    Mägi, Reedik; Morris, Andrew P

    2010-05-28

    Despite the recent success of genome-wide association studies in identifying novel loci contributing effects to complex human traits, such as type 2 diabetes and obesity, much of the genetic component of variation in these phenotypes remains unexplained. One way to improve power to detect further novel loci is through meta-analysis of studies from the same population, increasing the sample size over any individual study. Although statistical analysis software packages incorporate routines for meta-analysis, they are ill-equipped to meet the challenges of the scale and complexity of data generated in genome-wide association studies. We have developed flexible, open-source software for the meta-analysis of genome-wide association studies. The software incorporates a variety of error-trapping facilities and provides a range of meta-analysis summary statistics. The software is distributed with scripts that allow simple formatting of files containing the results of each association study and generate graphical summaries of genome-wide meta-analysis results. The GWAMA (Genome-Wide Association Meta-Analysis) software has been developed to perform meta-analysis of summary statistics generated from genome-wide association studies of dichotomous phenotypes or quantitative traits. Software with source files, documentation and example data files are freely available online at http://www.well.ox.ac.uk/GWAMA.
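
    The core computation behind such tools is fixed-effects, inverse-variance meta-analysis of per-study effect estimates; the sketch below shows that calculation generically, and is not GWAMA's actual source code.

      import math

      def meta_analyse(betas, ses):
          """Combine per-study effect sizes and standard errors for one variant."""
          weights = [1.0 / (se * se) for se in ses]            # inverse variance
          beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
          se = math.sqrt(1.0 / sum(weights))
          return beta, se, beta / se                           # estimate, SE, z-score

      # Three hypothetical studies of the same variant:
      beta, se, z = meta_analyse([0.12, 0.08, 0.15], [0.05, 0.04, 0.07])
      print(f"combined beta = {beta:.3f}, se = {se:.3f}, z = {z:.2f}")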

  4. Software Engineering Improvement Activities/Plan

    NASA Technical Reports Server (NTRS)

    2003-01-01

    bd Systems personnel accomplished the technical responsibilities for this reporting period, as planned. A close working relationship was maintained with personnel of the MSFC Avionics Department Software Group (ED14). Work accomplishments included development, evaluation, and enhancement of a software cost model, performing a literature search and evaluation of software tools available for code analysis and requirements analysis, and participating in other relevant software engineering activities. Monthly reports were submitted. This support was provided to the Flight Software Group/ED14 in accomplishing the software engineering improvement activities of the Marshall Space Flight Center (MSFC) Software Engineering Improvement Plan.

  5. Applications of Text Analytics in the Intelligence Community

    DTIC Science & Technology

    2016-06-01

    Master's thesis by Daniel M. Hall, June 2016; Thesis Advisor: Johannes O. Royset; Second Reader: Jon Alt. From the abstract: We evaluate Anseri, a commercial text analytics software, and its ability to assist a…

  6. Investigation into Text Classification With Kernel Based Schemes

    DTIC Science & Technology

    2010-03-01

    Documents are represented as term-document matrices (TDMs); common evaluation metrics (true positives and negatives) and the vector space model (VSM) are introduced, together with the software package Text to Matrix Generator (TMG). One chapter introduces the indexing capabilities of the TMG Toolbox, with specific attention placed on the…

  7. Validating the Effectiveness of Switching the Vancomycin TDM Analysis Software Based on the Predictive Accuracy.

    PubMed

    Imai, Shungo; Yamada, Takehiro; Ishiguro, Nobuhisa; Miyamoto, Takenori; Kagami, Keisuke; Tomiyama, Naoki; Niinuma, Yusuke; Nagasaki, Daisuke; Suzuki, Koji; Yamagami, Akira; Kasashi, Kumiko; Kobayashi, Masaki; Iseki, Ken

    2017-01-01

    Based on the predictive performance in our previous study, we switched the therapeutic drug monitoring (TDM) analysis software for dose setting of vancomycin (VCM) from "Vancomycin MEEK TDM analysis software Ver2.0" (MEEK) to "SHIONOGI-VCM-TDM ver.2009" (VCM-TDM) in January 2015. In the present study, our aim was to validate the effectiveness of switching the VCM TDM analysis software for the initial dose setting of VCM. The enrolled patients were divided into two groups, each having 162 patients in total, who received VCM with the initial dose set using MEEK (MEEK group) or VCM-TDM (VCM-TDM group). We compared the rates of attaining the therapeutic range (trough value, 10-20 μg/mL) of serum VCM concentration between the groups. Multivariate logistic regression analysis was performed to confirm that changing the VCM TDM analysis software was an independent factor related to attaining the therapeutic range. Switching the VCM TDM analysis software from MEEK to VCM-TDM improved the rate of attaining the therapeutic range by 21.6% (MEEK group: 42.6% vs. VCM-TDM group: 64.2%, p<0.01). Patient age ≥65 years, concomitant medication with furosemide, and use of the VCM-TDM analysis software were considered to be independent factors for attaining the therapeutic range. These results demonstrate the effectiveness of switching the VCM TDM analysis software from MEEK to VCM-TDM for the initial dose setting of VCM.

  8. Teacher's Guide to SERAPHIM Software III. Modern Chemistry.

    ERIC Educational Resources Information Center

    Bogner, Donna J.

    Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the third in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Modern Chemistry." Program suggestions are arranged in the same…

  9. Teacher's Guide to SERAPHIM Software IV. Chemistry: A Modern Course.

    ERIC Educational Resources Information Center

    Bogner, Donna J.

    Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the fourth in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemistry: A Modern Course." Program suggestions are arranged…

  10. Teacher's Guide to SERAPHIM Software VI. Chemistry: The Study of Matter.

    ERIC Educational Resources Information Center

    Bogner, Donna J.

    Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the sixth in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemistry: The Study of Matter." Program suggestions are…

  11. Teacher's Guide to SERAPHIM Software V. Chemistry: The Central Science.

    ERIC Educational Resources Information Center

    Bogner, Donna J.

    Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the fifth in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemistry: The Central Science." Program suggestions are…

  12. Application of Plagiarism Screening Software in the Chemical Engineering Curriculum

    ERIC Educational Resources Information Center

    Cooper, Matthew E.; Bullard, Lisa G.

    2014-01-01

    Plagiarism is an area of increasing concern for written ChE assignments, such as laboratory and design reports, due to ease of access to text and other materials via the internet. This study examines the application of plagiarism screening software to four courses in a university chemical engineering curriculum. The effectiveness of plagiarism…

  13. Developing ICALL Tools Using GATE

    ERIC Educational Resources Information Center

    Wood, Peter

    2008-01-01

    This article discusses the use of the General Architecture for Text Engineering (GATE) as a tool for the development of ICALL and NLP applications. It outlines a paradigm shift in software development, which is mainly influenced by projects such as the Free Software Foundation. It looks at standards that have been proposed to facilitate the…

  14. Windows VPN Set Up | High-Performance Computing | NREL

    Science.gov Websites

    Save the hpcvpn-win.conf file in your My Documents folder and configure the client software using that conf file. Start the Endian Connect App. You'll configure the connection using the hpcvpn-win.conf file, uncheck the "save password" link, and add your UserID.

  15. Computerized trainings in four groups of struggling readers: Specific effects on word reading and comprehension.

    PubMed

    Potocki, Anna; Magnan, Annie; Ecalle, Jean

    2015-01-01

    Four groups of poor readers were identified among a population of students with learning disabilities attending a special class in secondary school: normal readers, specific poor decoders, specific poor comprehenders, and general poor readers (with deficits in both decoding and comprehension). These students were then trained with a software program designed to encourage either their word decoding skills or their text comprehension skills. After 5 weeks of training, we observed that the students experiencing word reading deficits and trained with the decoding software improved primarily on the reading fluency task, while those exhibiting comprehension deficits and trained with the comprehension software showed improved performance in listening and reading comprehension. Interestingly, the latter software also led to improved performance on the word recognition task. This result suggests that, for these students, training interventions focused at the text level and its comprehension might be more beneficial for reading in general (i.e., for the two components of reading) than word-level decoding trainings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Development of a New VLBI Data Analysis Software

    NASA Technical Reports Server (NTRS)

    Bolotin, Sergei; Gipson, John M.; MacMillan, Daniel S.

    2010-01-01

    We present an overview of a new VLBI analysis software under development at NASA GSFC. The new software will replace CALC/SOLVE and many related utility programs. It will have the capabilities of the current system as well as incorporate new models and data analysis techniques. In this paper we give a conceptual overview of the new software and formulate its main goals. The software should be flexible and modular, to implement models and estimation techniques that currently exist or will appear in the future. On the other hand, it should be reliable and possess production quality for processing standard VLBI sessions. It also needs to be capable of processing observations from a fully deployed network of VLBI2010 stations in a reasonable time. We describe the software development process and outline the software architecture.

  17. Program for Editing Spacecraft Command Sequences

    NASA Technical Reports Server (NTRS)

    Gladden, Roy; Waggoner, Bruce; Kordon, Mark; Hashemi, Mahnaz; Hanks, David; Salcedo, Jose

    2006-01-01

    Sequence Translator, Editor, and Expander Resource (STEER) is a computer program that facilitates construction of sequences and blocks of sequences (hereafter denoted generally as sequence products) for commanding a spacecraft. STEER also provides mechanisms for translating among various sequence product types and quickly expanding the activities of a given sequence in chronological order for review and analysis of the sequence. To date, construction of sequence products has generally been done by use of such clumsy mechanisms as text-editor programs, translating among sequence product types has been challenging, and expanding sequences to time-ordered lists has involved arduous processes of converting sequence products to "real" sequences and running them through Class-A software (defined, loosely, as flight and ground software critical to a spacecraft mission). Also, heretofore, generating sequence products in standard formats has been troublesome because precise formatting and syntax are required. STEER alleviates these issues by providing a graphical user interface containing intuitive fields in which the user can enter the necessary information. The STEER expansion function provides a "quick and dirty" means of seeing how a sequence or sequence block would expand into a chronological list, without the need to use Class-A software.
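
    The expansion function can be illustrated with a minimal sketch that flattens nested command blocks, each entry carrying an offset relative to its block's start, into one chronological list. The block structure and commands are invented for illustration.

      from typing import List, Tuple, Union

      class Block:
          """A named block of entries, each at an offset (s) from block start."""
          def __init__(self, name: str, entries: List[Tuple[float, Union[str, "Block"]]]):
              self.name, self.entries = name, entries

      def expand(block: Block, start: float = 0.0) -> List[Tuple[float, str]]:
          """Flatten nested blocks into an absolute, time-ordered command list."""
          out: List[Tuple[float, str]] = []
          for offset, entry in block.entries:
              if isinstance(entry, Block):
                  out.extend(expand(entry, start + offset))  # recurse into sub-block
              else:
                  out.append((start + offset, entry))
          return sorted(out)

      warmup = Block("warmup", [(0.0, "HEATER_ON"), (30.0, "CHECK_TEMP")])
      seq = Block("observation", [(0.0, warmup), (60.0, "CAMERA_ON"), (65.0, "TAKE_IMAGE")])
      for t, command in expand(seq):
          print(f"t+{t:6.1f}s  {command}")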

  18. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2002-09-30

    Samson, Scott (Center for Ocean Technology). The project's objective is to develop automated image analysis software to reduce the effort and time…

  19. Computerized content analysis of some adolescent writings of Napoleon Bonaparte: a test of the validity of the method.

    PubMed

    Gottschalk, Louis A; DeFrancisco, Don; Bechtel, Robert J

    2002-08-01

    The aim of this study was to test the validity of a computer software program previously demonstrated to be capable of making DSM-IV neuropsychiatric diagnoses from the content analysis of speech or verbal texts. In this report, the computer program was applied to three personal writings of Napoleon Bonaparte when he was 12 to 16 years of age. The accuracy of the neuropsychiatric evaluations derived from the computerized content analysis of these writings of Napoleon was independently corroborated by two biographers who have described pertinent details concerning his life situations, moods, and other emotional reactions during this adolescent period of his life. The relevance of this type of computer technology to psychohistorical research and clinical psychiatry is suggested.

  20. Intelligent Text Retrieval and Knowledge Acquisition from Texts for NASA Applications: Preprocessing Issues

    NASA Technical Reports Server (NTRS)

    2001-01-01

    In this contract, which is a component of a larger contract that we plan to submit in the coming months, we plan to study the preprocessing issues which arise in applying natural language processing techniques to NASA-KSC problem reports. The goals of this work will be to deal with the issues of: a) automatically obtaining the problem reports from NASA-KSC data bases, b) the format of these reports and c) the conversion of these reports to a format that will be adequate for our natural language software. At the end of this contract, we expect that these problems will be solved and that we will be ready to apply our natural language software to a text database of over 1000 KSC problem reports.

  1. U.S. Geological Survey National Computer Technology Meeting: Program and Abstracts, Norfolk, Virginia, May 17-22, 1992

    DTIC Science & Technology

    1992-05-01

    FrameMaker, by Frame Technology, Incorporated, is the DIS-II ERP software program. FrameMaker uses the X Window System and supports paragraph and character formats that can easily integrate graphics and text into one document; it handles functions, calculus, relations, and other complicated math applications, and permits the user to define formats for master pages and reference pages…

  2. Computer-assisted qualitative data analysis software.

    PubMed

    Cope, Diane G

    2014-05-01

    Advances in technology have provided new approaches for data collection methods and analysis for researchers. Data collection is no longer limited to paper-and-pencil format, and numerous methods are now available through Internet and electronic resources. With these techniques, researchers are not burdened with entering data manually and data analysis is facilitated by software programs. Quantitative research is supported by the use of computer software and provides ease in the management of large data sets and rapid analysis of numeric statistical methods. New technologies are emerging to support qualitative research with the availability of computer-assisted qualitative data analysis software (CAQDAS). CAQDAS will be presented with a discussion of advantages, limitations, controversial issues, and recommendations for this type of software use.

  3. Generating qualitative data by design: the Australian Longitudinal Study on Women's Health qualitative data collection.

    PubMed

    Tavener, Meredith; Chojenta, Catherine; Loxton, Deborah

    2016-07-15

    Objectives and importance of study: The purpose of this study was to illustrate how qualitative free-text comments, collected within the context of a health survey, represent a rich data source for understanding specific phenomena. Work conducted with data from the Australian Longitudinal Study on Women's Health (ALSWH) was used to demonstrate the breadth and depth of qualitative information that can be collected. The ALSWH has been collecting data on women's health since 1996, and represents a unique opportunity for understanding lived experiences across the lifecourse. A multiple case study design was used to demonstrate the techniques that researchers have used to manage free-text qualitative comments collected by the ALSWH. Eleven projects conducted using free-text comments are discussed according to the method of analysis. These methods include coding (both inductively and deductively), longitudinal analyses and software-based analyses. This work shows that free-text comments are a data resource in their own right, and have the potential to provide rich and valuable information about a wide variety of topics.

  4. Automated validation of patient safety clinical incident classification: macro analysis.

    PubMed

    Gupta, Jaiprakash; Patrick, Jon

    2013-01-01

    Patient safety is a buzzword in healthcare. The Incident Information Management System (IIMS) is electronic software that stores clinical mishap narratives at the places where patients are treated. It is estimated that in one state alone over one million electronic text documents are available in IIMS. In this paper we investigate the data density of the fields entered to notify an incident and the validity of the built-in classification used by clinicians to categorize incidents. The Waikato Environment for Knowledge Analysis (WEKA) software was used to test the classes. Four statistical classifiers based on the J48, Naïve Bayes (NB), Naïve Bayes Multinomial (NBM) and Support Vector Machine with radial basis function (SVM_RBF) algorithms were used to validate the classes. The data pool was 10,000 clinical incidents drawn from 7 hospitals in one state in Australia. In the first part of the study, 1,000 clinical incidents were selected to determine the type and number of fields worth investigating; in the second part, another 5,448 clinical incidents were randomly selected to validate 13 clinical incident types. Results show that 74.6% of the cells were empty and only 23 fields had content over 70% of the time. The percentage of correctly classified instances across the four algorithms ranged from 42% to 49% using the categorical dataset, from 65% to 77% using the free-text dataset, and from 72% to 79% using both datasets. The kappa statistic ranged from 0.36 to 0.4 for categorical data, from 0.61 to 0.74 for free text, and from 0.67 to 0.77 for both datasets. Similar increases in performance across the three experiments were noted for true positive rate, precision, F-measure and area under the receiver operating characteristic curve (AUC-ROC). The study demonstrates that only 14 of 73 fields in IIMS hold data usable for machine learning experiments. Irrespective of the algorithm used, performance was better when both datasets were used. The NBM classifier showed the best performance. We think the classifiers can be improved further by reclassifying the most confused classes, and there is scope to apply text-mining tools to patient safety classifications.
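
    For illustration, the validation loop described above can be reproduced with open-source tools. The sketch below uses scikit-learn in place of WEKA (DecisionTreeClassifier standing in for J48, MultinomialNB for NBM, and an RBF-kernel SVC for SVM_RBF); the incident texts and labels are assumed inputs, as the IIMS data are not public.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import accuracy_score, cohen_kappa_score

      def validate(texts, labels):
          """Cross-validated accuracy and kappa for several classifiers."""
          X = TfidfVectorizer(max_features=5000).fit_transform(texts)
          for name, clf in [("tree (~J48)", DecisionTreeClassifier()),
                            ("NBM", MultinomialNB()),
                            ("SVM_RBF", SVC(kernel="rbf"))]:
              pred = cross_val_predict(clf, X, labels, cv=10)
              print(name,
                    "accuracy=%.3f" % accuracy_score(labels, pred),
                    "kappa=%.3f" % cohen_kappa_score(labels, pred))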

  5. Orbiter subsystem hardware/software interaction analysis. Volume 8: Forward reaction control system

    NASA Technical Reports Server (NTRS)

    Becker, D. D.

    1980-01-01

    The results of the orbiter hardware/software interaction analysis for the AFT reaction control system are presented. The interaction between hardware failure modes and software is examined in order to identify associated issues and risks. All orbiter subsystems and interfacing program elements which interact with the orbiter computer flight software are analyzed. The failure modes identified in the subsystem/element failure mode and effects analysis are discussed.

  6. Sneak Analysis Application Guidelines

    DTIC Science & Technology

    1982-06-01

    Excerpts from the report's list of figures (page numbers omitted): Hardware Program Change Cost Trend, Airborne Environment; Relative Software Program Change Costs; Derived Software Program Change Cost by Phase, Airborne Environment; Derived Software Program Change Cost by Phase, Ground/Water Environment; Total Software Program Change Costs; Sneak Analysis

  7. Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Fulton, Christopher E.

    2011-01-01

    This software user manual describes the implementation and use of the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial-off-the-shelf (COTS) testability analysis software package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool utilizes system design information captured within the diagnostic model and testability analysis output from the TEAMS Designer software to create a series of six reports for various system engineering needs. The ETA Tool allows the user to perform additional studies on the testability analysis results by determining the detection sensitivity to the loss of certain sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle. The diagnostic analysis provided by the ETA Tool was proven to be valuable system engineering output that provided consistency in the verification of system engineering requirements. This software user manual provides a description of each output report generated by the ETA Tool. The manual also describes the example diagnostic model and supporting documentation - also provided with the ETA Tool software release package - that were used to generate the reports presented in the manual.

  8. Software for quantitative analysis of radiotherapy: overview, requirement analysis and design solutions.

    PubMed

    Zhang, Lanlan; Hub, Martina; Mang, Sarah; Thieke, Christian; Nix, Oliver; Karger, Christian P; Floca, Ralf O

    2013-06-01

    Radiotherapy is a fast-developing discipline which plays a major role in cancer care. Quantitative analysis of radiotherapy data can improve the success of the treatment and support the prediction of outcome. In this paper, we first identify functional, conceptual and general requirements for a software system for quantitative analysis of radiotherapy. Further, we present an overview of existing radiotherapy analysis software tools and check them against the stated requirements. As none of them could meet all of the demands presented herein, we analyzed possible conceptual problems and present software design solutions and recommendations to meet the stated requirements (e.g. algorithmic decoupling via a dose iterator pattern; analysis database design). As a proof of concept we developed a software library, "RTToolbox", following the presented design principles. The RTToolbox is available as an open source library and has already been tested in a larger-scale software system for different use cases. These examples demonstrate the benefit of the presented design principles. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    PubMed

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction had been solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible in a trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to profit even more strongly than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.

  10. Computer-Aided System Engineering and Analysis (CASE/A) Programmer's Manual, Version 5.0

    NASA Technical Reports Server (NTRS)

    Knox, J. C.

    1996-01-01

    The Computer Aided System Engineering and Analysis (CASE/A) Version 5.0 Programmer's Manual provides the programmer and user with information regarding the internal structure of the CASE/A 5.0 software system. CASE/A 5.0 is a trade study tool that provides modeling/simulation capabilities for analyzing environmental control and life support systems and active thermal control systems. CASE/A has been successfully used in studies such as the evaluation of carbon dioxide removal in the space station. CASE/A modeling provides a graphical and command-driven interface for the user. This interface allows the user to construct a model by placing equipment components in a graphical layout of the system hardware, then connect the components via flow streams and define their operating parameters. Once the equipment is placed, the simulation time and other control parameters can be set to run the simulation based on the model constructed. After completion of the simulation, graphical plots or text files can be obtained for evaluation of the simulation results over time. Additionally, users have the capability to control the simulation and extract information at various times in the simulation (e.g., control equipment operating parameters over the simulation time or extract plot data) by using "User Operations (OPS) Code." This OPS code is written in FORTRAN with a canned set of utility subroutines for performing common tasks. CASE/A version 5.0 software runs under the VAX VMS(Trademark) environment. It utilizes the Tektronix 4014(Trademark) graphics display system and the VT100(Trademark) text manipulation/display system.

  11. Extending Prior Posts in Dyadic Online Text Chat

    ERIC Educational Resources Information Center

    Tudini, Vincenza

    2015-01-01

    This study explores whether chat users are able to extend prior, apparently completed posts in the dyadic online text chat context. Dyadic text chat has a unique turn-taking system, and most chat software does not permit users to monitor one another's written messages-in-progress. This is likely to impact on their use of online extensions as an…

  12. A Comparison of Text, Voice, and Screencasting Feedback to Online Students

    ERIC Educational Resources Information Center

    Orlando, John

    2016-01-01

    The emergence of simple video and voice recording software has allowed faculty to deliver online course content in a variety of rich formats. But most faculty are still using traditional text comments for feedback to students. The author launched a study comparing student and faculty perceptions of text, voice, and screencasting feedback. The…

  13. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded into the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.
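
    As a rough illustration of the structure of the problem UDECON solves, the Python sketch below sets up a generic Tikhonov-regularised deconvolution. It is not the ABIC-based algorithm of Oda and Xuan (2014); the smoothness parameter here would in practice be tuned by the grid search or simplex optimisation described above.

      import numpy as np
      from scipy.linalg import convolution_matrix

      def deconvolve(measured, response, smoothness):
          """Regularised deconvolution: argmin ||G m - d||^2 + s^2 ||D m||^2,
          where G applies the sensor response and D penalises roughness."""
          n = len(measured)
          G = convolution_matrix(response, n, mode="same")
          D = np.diff(np.eye(n), axis=0)           # first-difference operator
          A = G.T @ G + smoothness**2 * (D.T @ D)
          return np.linalg.solve(A, G.T @ measured)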

  14. GSOSTATS Database: USAF Synchronous Satellite Catalog Data Conversion Software. User's Guide and Software Maintenance Manual, Version 2.1

    NASA Technical Reports Server (NTRS)

    Mallasch, Paul G.; Babic, Slavoljub

    1994-01-01

    The United States Air Force (USAF) provides NASA Lewis Research Center with monthly reports containing the Synchronous Satellite Catalog and the associated Two Line Mean Element Sets. The USAF Synchronous Satellite Catalog supplies satellite orbital parameters collected by an automated monitoring system and provided to Lewis Research Center as text files on magnetic tape. Software was developed to facilitate automated formatting, data normalization, cross-referencing, and error correction of Synchronous Satellite Catalog files before loading into the NASA Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS). This document contains the User's Guide and Software Maintenance Manual with information necessary for installation, initialization, start-up, operation, error recovery, and termination of the software application. It also contains implementation details, modification aids, and software source code adaptations for use in future revisions.

  15. A survey of Canadian medical physicists: software quality assurance of in‐house software

    PubMed Central

    Kelly, Diane

    2015-01-01

    This paper reports on a survey of medical physicists who write and use in‐house written software as part of their professional work. The goal of the survey was to assess the extent of in‐house software usage and the desire or need for related software quality guidelines. The survey contained eight multiple‐choice questions, a ranking question, and seven free text questions. The survey was sent to medical physicists associated with cancer centers across Canada. The respondents to the survey expressed interest in having guidelines to help them in their software‐related work, but also demonstrated extensive skills in the area of testing, safety, and communication. These existing skills form a basis for medical physicists to establish a set of software quality guidelines. PACS number: 87.55.Qr PMID:25679168

  16. Automated Authorship Attribution Using Advanced Signal Classification Techniques

    PubMed Central

    Ebrahimpour, Maryam; Putniņš, Tālis J.; Berryman, Matthew J.; Allison, Andrew; Ng, Brian W.-H.; Abbott, Derek

    2013-01-01

    In this paper, we develop two automated authorship attribution schemes, one based on Multiple Discriminant Analysis (MDA) and the other based on a Support Vector Machine (SVM). The classification features we exploit are based on word frequencies in the text. We adopt an approach of preprocessing each text by stripping it of all characters except a-z and space. This is in order to increase the portability of the software to different types of texts. We test the methodology on a corpus of undisputed English texts, and use leave-one-out cross validation to demonstrate classification accuracies in excess of 90%. We further test our methods on the Federalist Papers, which have a partly disputed authorship and a fair degree of scholarly consensus. And finally, we apply our methodology to the question of the authorship of the Letter to the Hebrews by comparing it against a number of original Greek texts of known authorship. These tests identify where some of the limitations lie, motivating a number of open questions for future work. An open source implementation of our methodology is freely available for use at https://github.com/matthewberryman/author-detection. PMID:23437047
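
    The preprocessing and classification steps are simple enough to sketch. The following is a minimal Python rendering of the SVM variant, with the a-z-and-space stripping and leave-one-out cross validation described above; a linear-kernel SVM and raw word counts are simplifying assumptions here, and the MDA scheme is omitted.

      import re
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      def strip_chars(text: str) -> str:
          """Keep only a-z and space, as in the preprocessing step above."""
          return re.sub(r"[^a-z ]", " ", text.lower())

      def loo_accuracy(texts, authors):
          """Leave-one-out classification accuracy over word-frequency features."""
          X = CountVectorizer(preprocessor=strip_chars).fit_transform(texts)
          return cross_val_score(LinearSVC(), X, authors, cv=LeaveOneOut()).mean()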

  17. Theoretical and software considerations for nonlinear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1983-01-01

    In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software system. Improvements of structural analysis software including more efficient use of existing hardware and improved structural modeling techniques are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.

  18. Using recurrence plot analysis for software execution interpretation and fault detection

    NASA Astrophysics Data System (ADS)

    Mosdorf, M.

    2015-09-01

    This paper presents a method for software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. The results of this analysis are then processed with the PCA (Principal Component Analysis) method, which reduces the number of coefficients used for software execution classification. The method was used for the analysis of five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR and SHA-1. Results show that some of the collected traces can easily be assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms), while others are more difficult to distinguish.

  19. Performing Quantitative Imaging Acquisition, Analysis and Visualization Using the Best of Open Source and Commercial Software Solutions.

    PubMed

    Shenoy, Shailesh M

    2016-07-01

    A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between using open source and commercial software to perform quantitative image acquisition, analysis and visualization. In addition to considering the expense of software licensing, one must consider factors such as the quality and usefulness of the software's support, training and documentation. One must also consider the reproducibility with which multiple people generate results using the same software for the same analysis, how methods built on the software may be distributed to the community, and the potential for automation to improve productivity.

  20. Analyzing qualitative data with computer software.

    PubMed Central

    Weitzman, E A

    1999-01-01

    OBJECTIVE: To provide health services researchers with an overview of the qualitative data analysis process and the role of software within it; to provide a principled approach to choosing among software packages to support qualitative data analysis; to alert researchers to the potential benefits and limitations of such software; and to provide an overview of the developments to be expected in the field in the near future. DATA SOURCES, STUDY DESIGN, METHODS: This article does not include reports of empirical research. CONCLUSIONS: Software for qualitative data analysis can benefit the researcher in terms of speed, consistency, rigor, and access to analytic methods not available by hand. Software, however, is not a replacement for methodological training. PMID:10591282

  1. GCE Data Toolbox for MATLAB - a software framework for automating environmental data processing, quality control and documentation

    NASA Astrophysics Data System (ADS)

    Sheldon, W.; Chamblee, J.; Cary, R. H.

    2013-12-01

    Environmental scientists are under increasing pressure from funding agencies and journal publishers to release quality-controlled data in a timely manner, as well as to produce comprehensive metadata for submitting data to long-term archives (e.g. DataONE, Dryad and BCO-DMO). At the same time, the volume of digital data that researchers collect and manage is increasing rapidly due to advances in high frequency electronic data collection from flux towers, instrumented moorings and sensor networks. However, few pre-built software tools are available to meet these data management needs, and those tools that do exist typically focus on part of the data management lifecycle or one class of data. The GCE Data Toolbox has proven to be both a generalized and effective software solution for environmental data management in the Long Term Ecological Research Network (LTER). This open source MATLAB software library, developed by the Georgia Coastal Ecosystems LTER program, integrates metadata capture, creation and management with data processing, quality control and analysis to support the entire data lifecycle. Raw data can be imported directly from common data logger formats (e.g. SeaBird, Campbell Scientific, YSI, Hobo), as well as delimited text files, MATLAB files and relational database queries. Basic metadata are derived from the data source itself (e.g. parsed from file headers) and by value inspection, and then augmented using editable metadata templates containing boilerplate documentation, attribute descriptors, code definitions and quality control rules. Data and metadata content, quality control rules and qualifier flags are then managed together in a robust data structure that supports database functionality and ensures data validity throughout processing. A growing suite of metadata-aware editing, quality control, analysis and synthesis tools are provided with the software to support managing data using graphical forms and command-line functions, as well as developing automated workflows for unattended processing. Finalized data and structured metadata can be exported in a wide variety of text and MATLAB formats or uploaded to a relational database for long-term archiving and distribution. The GCE Data Toolbox can be used as a complete, light-weight solution for environmental data and metadata management, but it can also be used in conjunction with other cyber infrastructure to provide a more comprehensive solution. For example, newly acquired data can be retrieved from a Data Turbine or Campbell LoggerNet Database server for quality control and processing, then transformed to CUAHSI Observations Data Model format and uploaded to a HydroServer for distribution through the CUAHSI Hydrologic Information System. The GCE Data Toolbox can also be leveraged in analytical workflows developed using Kepler or other systems that support MATLAB integration or tool chaining. This software can therefore be leveraged in many ways to help researchers manage, analyze and distribute the data they collect.
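
    The toolbox itself is MATLAB, but the core idea of editable, metadata-driven quality-control rules that attach qualifier flags to values is easy to illustrate. The pandas sketch below is an analogy only, with hypothetical column names and rules, not GCE Data Toolbox code.

      import pandas as pd

      # Hypothetical QC rules: column -> list of (predicate, qualifier flag)
      RULES = {
          "water_temp_c": [
              (lambda s: (s < -5) | (s > 40), "I"),   # "I": impossible value
              (lambda s: s.diff().abs() > 5, "Q"),    # "Q": questionable spike
          ],
      }

      def apply_qc(df: pd.DataFrame) -> pd.DataFrame:
          """Return a frame of qualifier flags, one cell per data value."""
          flags = pd.DataFrame("", index=df.index, columns=df.columns)
          for col, rules in RULES.items():
              for predicate, code in rules:
                  flags.loc[predicate(df[col]), col] += code
          return flags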

  2. An Analysis of Mission Critical Computer Software in Naval Aviation

    DTIC Science & Technology

    1991-03-01

    AN ANALYSIS OF MISSION CRITICAL COMPUTER SOFTWARE IN NAVAL AVIATION ... software development schedules were sustained without a milestone change being made. Also, software that was released to the fleet had no major ... fleet contain any major defects? This research has revealed that only about half of the original software development schedules were sustained without a

  3. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, W. A.; Lepicovsky, J.

    1992-01-01

    The software for configuring an LV counter processor system has been developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system has been developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.

  4. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, William A.

    1992-01-01

    The software for configuring a Laser Velocimeter (LV) counter processor system was developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system was developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.

  5. Spacelab user implementation assessment study. (Software requirements analysis). Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The engineering analyses and evaluation studies conducted for the Software Requirements Analysis are discussed. Included are the development of the study data base, synthesis of implementation approaches for software required by both mandatory onboard computer services and command/control functions, and identification and implementation of software for ground processing activities.

  6. A Study on Environmental Research Trends Using Text-Mining Method - Focus on Spatial information and ICT -

    NASA Astrophysics Data System (ADS)

    Lee, M. J.; Oh, K. Y.; Joung-ho, L.

    2016-12-01

    Recently there has been much research in various fields that analyses the interactions between entities by text mining. In this paper, we aim to quantitatively analyse research trends in environmental research relating to either spatial information or ICT (Information and Communications Technology) using text-mining analysis. To do this, we applied a low-dimensional embedding method, clustering analysis, and association rules to find meaningful associative patterns among keywords that frequently appear in the articles. Supposing that KCI (Korea Citation Index) articles reflect academic demand, a total of 1,228 KCI articles published from 1996 to 2015 were reviewed and analysed by text-mining methods. First, we retrieved the KCI articles from the NDSL (National Discovery for Science Leaders) site. We then pre-processed the keywords selected from the abstracts and classified them into separable sectors. We investigated the appearance rates and association rules of keywords for articles in the two fields, spatial information and ICT. In order to detect historic trends, the analysis was conducted separately for four periods: 1996-2000, 2001-2005, 2006-2010 and 2011-2015. These analyses were conducted using R software. As a result, we confirmed that environmental research relating to spatial information mainly focused on fields such as `GIS (35%)', `Remote-Sensing (25%)' and `environmental theme map (15.7%)'. Next, `ICT technology (23.6%)', `ICT service (5.4%)', `mobile (24%)', `big data (10%)' and `AI (7%)' primarily emerge from environmental research relating to ICT. From these results, this paper asserts that the identified research trends and academic progress are well structured for reviewing recent spatial information and ICT technology, and that the outcomes of the analysis can serve as adequate guidelines for establishing environmental policies and strategies. KEY WORDS: Big data, Text-mining, Environmental research, Spatial-information, ICT. Acknowledgements: The authors appreciate the support that this study has received from `Building application frame of environmental issues, to respond to the latest ICT trends'.
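
    The study's per-period appearance-rate calculation was done in R; the logic, however, is compact enough to sketch in Python (used here to keep a single example language). The `articles` input is an assumed list of (year, keywords) pairs, not the actual NDSL export.

      from collections import Counter

      PERIODS = [(1996, 2000), (2001, 2005), (2006, 2010), (2011, 2015)]

      def keyword_rates(articles):
          """Print the top keywords and their appearance rates per period."""
          for lo, hi in PERIODS:
              subset = [kws for year, kws in articles if lo <= year <= hi]
              counts = Counter(kw for kws in subset for kw in set(kws))
              total = len(subset) or 1
              top = ", ".join("%s (%.1f%%)" % (kw, 100.0 * c / total)
                              for kw, c in counts.most_common(5))
              print("%d-%d: %s" % (lo, hi, top))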

  7. Text Information Extraction System (TIES) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    TIES is a service-based software system for acquiring, deidentifying, and processing clinical text reports using natural language processing, and for querying, sharing and using this data to foster tissue- and image-based research, within and between institutions.

  8. A Study of the Use of Web-Based Conferencing Software To Enhance Learning Environments in Teacher Education.

    ERIC Educational Resources Information Center

    Sosin, Adrienne

    This action research study of electronic conferencing highlights the online portions of teacher education courses at Pace University, New York. The study explores the infusion of technology into teaching and investigates the utility of a particular type of discussion software for learning. Data sources include texts of electronic conversations,…

  9. Design and Empirical Evaluation of Search Software for Legal Professionals on the WWW.

    ERIC Educational Resources Information Center

    Dempsey, Bert J.; Vreeland, Robert C.; Sumner, Robert G., Jr.; Yang, Kiduk

    2000-01-01

    Discussion of effective search aids for legal researchers on the World Wide Web focuses on the design and evaluation of two software systems developed to explore models for browsing and searching across a user-selected set of Web sites. Describes crawler-enhanced search engines, filters, distributed full-text searching, and natural language…

  10. Investigating Deaf Students' Use of Visual Multimedia Resources in Reading Comprehension

    ERIC Educational Resources Information Center

    Nikolaraizi, Magda; Vekiri, Ioanna; Easterbrooks, Susan R.

    2013-01-01

    A mixed research design was used to examine how deaf students used the visual resources of a multimedia software package that was designed to support reading comprehension. The viewing behavior of 8 deaf students, ages 8-12 years, was recorded during their interaction with multimedia software that included narrative texts enriched with Greek Sign…

  11. Designing, Developing and Implementing a Software Tool for Scenario Based Learning

    ERIC Educational Resources Information Center

    Norton, Geoff; Taylor, Mathew; Stewart, Terry; Blackburn, Greg; Jinks, Audrey; Razdar, Bahareh; Holmes, Paul; Marastoni, Enrique

    2012-01-01

    The pedagogical value of problem-based and inquiry-based learning activities has led to increased use of this approach in many courses. While scenarios or case studies were initially presented to learners as text-based material, the development of modern software technology provides the opportunity to deliver scenarios as e-learning modules,…

  12. Student Perceptions of a Trial of Electronic Text Matching Software: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Green, David; Lindemann, Iris; Marshall, Kelly; Wilkinson, Grette

    2005-01-01

    It is accepted that using electronic detection methods has benefits within an overall strategy to promote academic integrity in an institution. Little attention has been paid to obtaining student perceptions to evaluate the cost/benefit of using such methods. This study reports on the evaluation of a trial of Turnitin software. 728 students…

  13. UNIX as an environment for producing numerical software

    NASA Technical Reports Server (NTRS)

    Schryer, N. L.

    1978-01-01

    The UNIX operating system supports a number of software tools: a mathematical equation-setting language, a phototypesetting language, a FORTRAN preprocessor language, a text editor, and a command interpreter. The design, implementation, documentation, and maintenance of a portable FORTRAN test of the floating-point arithmetic unit of a computer are used to illustrate these tools at work.

  14. Relating Communications Mode Choice and Teamwork Quality: Conversational versus Textual Communication in IT System and Software Development Teams

    ERIC Educational Resources Information Center

    Smith, James Robert

    2012-01-01

    This cross-sectional study explored how IT system and software development team members communicated in the workplace and whether teams that used more verbal communication (and less text-based communication) experienced higher levels of collaboration as measured using the Teamwork Quality (TWQ) scale. Although computer-mediated communication tools…

  15. Spiking Neural Classifier with Lumped Dendritic Nonlinearity and Binary Synapses: A Current Mode VLSI Implementation and Analysis.

    PubMed

    Bhaduri, Aritra; Banerjee, Amitava; Roy, Subhrajit; Kar, Sougata; Basu, Arindam

    2018-03-01

    We present a neuromorphic current-mode implementation of a spiking neural classifier with a lumped square-law dendritic nonlinearity. It has been shown previously in software simulations that such a system with binary synapses can be trained with structural plasticity algorithms to achieve classification accuracy comparable to conventional algorithms while using fewer synaptic resources. We show that even in real analog systems with manufacturing imperfections (CV of 23.5% and 14.4% for dendritic branch gains and leaks respectively), this network is able to produce comparable results with fewer synaptic resources. The chip, fabricated in [Formula: see text]m complementary metal oxide semiconductor, has eight dendrites per cell and uses two opposing cells per class to cancel common-mode inputs. The chip can operate down to [Formula: see text] V and dissipates 19 nW of static power per neuronal cell and [Formula: see text] 125 pJ/spike. For two-class classification problems of high-dimensional rate-encoded binary patterns, the hardware achieves performance comparable to a software implementation of the same network, with only about a 0.5% reduction in accuracy. On two UCI data sets, the integrated circuit has classification accuracy comparable to standard machine learners such as support vector machines and extreme learning machines while using two to five times fewer binary synapses. We also show that the system can operate on mean-rate-encoded spike patterns as well as short bursts of spikes. To the best of our knowledge, this is the first attempt in hardware to perform classification exploiting dendritic properties and binary synapses.

  16. Models Extracted from Text for System-Software Safety Analyses

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2010-01-01

    This presentation describes extraction and integration of requirements information and safety information in visualizations to support early review of completeness, correctness, and consistency of lengthy and diverse system safety analyses. Software tools have been developed and extended to perform the following tasks: 1) extract model parts and safety information from text in interface requirements documents, failure modes and effects analyses and hazard reports; 2) map and integrate the information to develop system architecture models and visualizations for safety analysts; and 3) provide model output to support virtual system integration testing. This presentation illustrates the methods and products with a rocket motor initiation case.

  17. Telemetry-Enhancing Scripts

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.

    2009-01-01

    Scripts Providing a Cool Kit of Telemetry Enhancing Tools (SPACKLE) is a set of software tools that fill gaps in the capabilities of other software used in processing downlinked data in the Mars Exploration Rovers (MER) flight and test-bed operations. SPACKLE tools have helped to accelerate the automatic processing and interpretation of MER mission data, enabling non-experts to understand and/or use MER query and data product command simulation software tools more effectively. SPACKLE has greatly accelerated some operations and provides new capabilities. The tools of SPACKLE are written, variously, in Perl or the C or C++ language. They perform a variety of search and shortcut functions that include the following: Generating text-only, Event Report-annotated, and Web-enhanced views of command sequences; Labeling integer enumerations with their symbolic meanings in text messages and engineering channels; Systematically detecting corruption within data products; Generating text-only displays of data-product catalogs including downlink status; Validating and labeling commands related to data products; Performing convenient searches of detailed engineering data spanning multiple Martian solar days; Generating tables of initial conditions pertaining to engineering, health, and accountability data; Simplifying the construction and simulation of command sequences; and Performing fast time format conversions and sorting.

  18. Mining biomedical images towards valuable information retrieval in biomedical and life sciences

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners when drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the production and publication of heterogeneous biomedical images, which creates a need for bioimaging platforms that extract features and analyse the text and content of biomedical images in order to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved and limitations. Our comparative conclusions cover current challenges for bioimaging software in selective image mining, embedded text extraction and the processing of complex natural language queries. PMID:27538578

  19. GammaLib and ctools. A software framework for the analysis of astronomical gamma-ray data

    NASA Astrophysics Data System (ADS)

    Knödlseder, J.; Mayer, M.; Deil, C.; Cayrou, J.-B.; Owen, E.; Kelley-Hoskins, N.; Lu, C.-C.; Buehler, R.; Forest, F.; Louge, T.; Siejkowski, H.; Kosack, K.; Gerard, L.; Schulz, A.; Martin, P.; Sanchez, D.; Ohm, S.; Hassan, T.; Brau-Nogué, S.

    2016-08-01

    The field of gamma-ray astronomy has seen important progress during the last decade, yet to date no common software framework has been developed for the scientific analysis of gamma-ray telescope data. We propose to fill this gap by means of the GammaLib software, a generic library that we have developed to support the analysis of gamma-ray event data. GammaLib was written in C++ and all functionality is available in Python through an extension module. Based on this framework we have developed the ctools software package, a suite of software tools that enables flexible workflows to be built for the analysis of Imaging Air Cherenkov Telescope event data. The ctools are inspired by science analysis software available for existing high-energy astronomy instruments, and they follow the modular ftools model developed by the High Energy Astrophysics Science Archive Research Center. The ctools were written in Python and C++, and can be either used from the command line via shell scripts or directly from Python. In this paper we present the GammaLib and ctools software versions 1.0 that were released at the end of 2015. GammaLib and ctools are ready for the science analysis of Imaging Air Cherenkov Telescope event data, and also support the analysis of Fermi-LAT data and the exploitation of the COMPTEL legacy data archive. We propose using ctools as the science tools software for the Cherenkov Telescope Array Observatory.

  20. Using Twitter to access the human right of communication for people who use Augmentative and Alternative Communication (AAC).

    PubMed

    Hemsley, Bronwyn; Palmer, Stuart; Dann, Stephen; Balandin, Susan

    2018-02-01

    Articles 19, 26 and 27 of the Universal Declaration of Human Rights and Articles 4, 9 and 21 of the Convention on the Rights of Persons with Disabilities promote the human rights of communication, education, use of technology and access to information. Social media is an important form of online communication, and Twitter increases users' visibility, influence and reach online. The aim of this sociotechnical research was to determine the impact of teaching three people who use Augmentative and Alternative Communication (AAC) to use Twitter. Three participants were trained in ways of using Twitter strategically. Data collected from participants' Twitter profiles were examined to determine the impact of training on Twitter follower count, frequency of tweeting, tweet content and the development of social networks. Data were also examined using (1) KH Coder software analysis and visualisation of co-occurring networks in the text data, based on word frequencies; and (2) Gephi software analysis to show the Twitter network for each participant. Two participants showed an improvement in Twitter skills and strategies. Twitter can be used to improve social connectedness of people who use AAC, and should not be overlooked in relation to communication rights.

  1. DnaSAM: Software to perform neutrality testing for large datasets with complex null models.

    PubMed

    Eckert, Andrew J; Liechty, John D; Tearse, Brandon R; Pande, Barnaly; Neale, David B

    2010-05-01

    Patterns of DNA sequence polymorphisms can be used to understand the processes of demography and adaptation within natural populations. High-throughput generation of DNA sequence data has historically been the bottleneck with respect to data processing and experimental inference. Advances in marker technologies have largely solved this problem. Currently, the limiting step is computational, with most molecular population genetic software allowing only a gene-by-gene analysis through a graphical user interface. An easy-to-use analysis program that allows both high-throughput processing of multiple sequence alignments and the flexibility to simulate data under complex demographic scenarios has been lacking. We introduce a new program, named DnaSAM, which allows high-throughput estimation of DNA sequence diversity and neutrality statistics from experimental data, along with the ability to test those statistics via Monte Carlo coalescent simulations. These simulations are conducted using the ms program, which is able to incorporate several genetic parameters (e.g. recombination) and demographic scenarios (e.g. population bottlenecks). The output is a set of diversity and neutrality statistics with associated probability values under a user-specified null model, which are stored in an easy-to-manipulate text file. © 2009 Blackwell Publishing Ltd.
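
    The testing step reduces to comparing an observed statistic against its simulated null distribution. A minimal Python sketch of that final comparison, assuming the coalescent simulations (e.g. via ms) and the statistic computation have already been done elsewhere:

      def empirical_p(observed, simulated):
          """Two-tailed empirical p-value of an observed neutrality statistic
          (e.g. Tajima's D) against values from coalescent simulations."""
          n = len(simulated)
          lower = sum(s <= observed for s in simulated) / n
          upper = sum(s >= observed for s in simulated) / n
          return min(1.0, 2.0 * min(lower, upper))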

  2. Elements of strategic capability for software outsourcing enterprises based on the resource

    NASA Astrophysics Data System (ADS)

    Shi, Wengeng

    2011-10-01

    Software outsourcing enterprises are emerging high-tech enterprises, and both the speed of their rise and their numbers have been remarkable. Beyond the preferential policies that China grants to software outsourcing, an outsourcing business needs the ability to upgrade its capabilities, which software companies in general have not possessed. Viewed from the resource-based theory of the firm, we analyse whether software outsourcing companies hold capabilities and resources that are rare, valuable and difficult to imitate, and on this basis we attempt to give an initial framework for theoretical analysis.

  3. Loose, Falling Characters and Sentences: The Persistence of the OCR Problem in Digital Repository E-Books

    ERIC Educational Resources Information Center

    Kichuk, Diana

    2015-01-01

    The electronic conversion of scanned image files to readable text using optical character recognition (OCR) software and the subsequent migration of raw OCR text to e-book text file formats are key remediation or media conversion technologies used in digital repository e-book production. Despite real progress, the OCR problem of reliability and…

  4. Teach Your Computer to Read: Scanners and Optical Character Recognition.

    ERIC Educational Resources Information Center

    Marsden, Jim

    1993-01-01

    Desktop scanners can be used with a software technology called optical character recognition (OCR) to convert the text on virtually any paper document into an electronic form. OCR offers educators new flexibility in incorporating text into tests, lesson plans, and other materials. (MLF)

  5. Integrating Query of Relational and Textual Data in Clinical Databases: A Case Study

    PubMed Central

    Fisk, John M.; Mutalik, Pradeep; Levin, Forrest W.; Erdos, Joseph; Taylor, Caroline; Nadkarni, Prakash

    2003-01-01

    Objectives: The authors designed and implemented a clinical data mart composed of an integrated information retrieval (IR) and relational database management system (RDBMS). Design: Using commodity software, which supports interactive, attribute-centric text and relational searches, the mart houses 2.8 million documents that span a five-year period and supports basic IR features such as Boolean searches, stemming, and proximity and fuzzy searching. Measurements: Results are relevance-ranked using either “total documents per patient” or “report type weighting.” Results: Non-curated medical text has a significant degree of malformation with respect to spelling and punctuation, which creates difficulties for text indexing and searching. Presently, the IR facilities of RDBMS packages lack the features necessary to handle such malformed text adequately. Conclusion: A robust IR+RDBMS system can be developed, but it requires integrating RDBMSs with third-party IR software. RDBMS vendors need to make their IR offerings more accessible to non-programmers. PMID:12509355
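
    A modern stand-in for this kind of integrated attribute-centric plus full-text query is SQLite's FTS5 extension (included in most Python builds). The sketch below uses a hypothetical schema, not the system described above, to join a relational filter with a proximity text search.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
          CREATE TABLE reports(id INTEGER PRIMARY KEY, patient_id TEXT,
                               report_type TEXT, report_date TEXT);
          CREATE VIRTUAL TABLE report_text USING fts5(body);
      """)
      # ... load rows so that report_text.rowid equals reports.id ...

      rows = con.execute("""
          SELECT r.patient_id, r.report_date
          FROM reports AS r
          JOIN report_text ON report_text.rowid = r.id
          WHERE report_text MATCH 'NEAR(biopsy malignan*, 10)'
            AND r.report_type = 'pathology'
      """).fetchall()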

  6. The Comparison of VLBI Data Analysis Using Software Globl and Globk

    NASA Astrophysics Data System (ADS)

    Guangli, W.; Xiaoya, W.; Jinling, L.; Wenyao, Z.

    The comparison of different geodetic data analysis software packages is a frequently discussed topic. In this paper we try to find the differences between the software packages GLOBL and GLOBK when they are used to process the same set of VLBI data. GLOBL is software developed by the VLBI team, geodesy branch, GSFC/NASA to process geodetic VLBI data using an arc-parameter-elimination algorithm, while GLOBK, based on Kalman filtering, is mainly used in GPS data analysis but is also used in VLBI data analysis. Our work focuses on whether there are significant differences when the two packages are used to analyze the same VLBI data set, and investigates the reasons for any differences found.

  7. ACES: Space shuttle flight software analysis expert system

    NASA Technical Reports Server (NTRS)

    Satterwhite, R. Scott

    1990-01-01

    The Analysis Criteria Evaluation System (ACES) is a knowledge based expert system that automates the final certification of the Space Shuttle onboard flight software. Guidance, navigation and control of the Space Shuttle through all its flight phases are accomplished by a complex onboard flight software system. This software is reconfigured for each flight to allow thousands of mission-specific parameters to be introduced and must therefore be thoroughly certified prior to each flight. This certification is performed in ground simulations by executing the software in the flight computers. Flight trajectories from liftoff to landing, including abort scenarios, are simulated and the results are stored for analysis. The current methodology of performing this analysis is repetitive and requires many man-hours. The ultimate goals of ACES are to capture the knowledge of the current experts and improve the quality and reduce the manpower required to certify the Space Shuttle onboard flight software.

  8. The Effects of Development Team Skill on Software Product Quality

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.

    2006-01-01

    This paper provides an analysis of the effect of the skill/experience of the software development team on the quality of the final software product. A method for the assessment of software development team skill and experience is proposed, and was derived from a workforce management tool currently in use by the National Aeronautics and Space Administration. Using data from 26 small-scale software development projects, the team skill measures are correlated to 5 software product quality metrics from the ISO/IEC 9126 Software Engineering Product Quality standard. In the analysis of the results, development team skill is found to be a significant factor in the adequacy of the design and implementation. In addition, the results imply that inexperienced software developers are tasked with responsibilities ill-suited to their skill level, and thus have a significant adverse effect on the quality of the software product. Keywords: software quality, development skill, software metrics

  9. Knowledge Domain and Emerging Trends in Organic Photovoltaic Technology: A Scientometric Review Based on CiteSpace Analysis.

    PubMed

    Xiao, Fengjun; Li, Chengzhi; Sun, Jiangman; Zhang, Lianjie

    2017-01-01

    To study the rapid growth of research on organic photovoltaic (OPV) technology, development trends in the relevant research are analyzed based on CiteSpace software of text mining and visualization in scientific literature. By this analytical method, the outputs and cooperation of authors, the hot research topics, the vital references and the development trend of OPV are identified and visualized. Different from the traditional review articles by the experts on OPV, this work provides a new method of visualizing information about the development of the OPV technology research over the past decade quantitatively.

  10. Knowledge Domain and Emerging trends in Organic Photovoltaic Technology: A Scientometric Review Based on CiteSpace Analysis

    NASA Astrophysics Data System (ADS)

    Xiao, Fengjun; Li, Chengzhi; Sun, Jiangman; Zhang, Lianjie

    2017-09-01

    To study the rapid growth of research on organic photovoltaic (OPV) technology, development trends in the relevant research are analyzed based on CiteSpace software of text mining and visualization in scientific literature. By this analytical method, the outputs and cooperation of authors, the hot research topics, the vital references and the development trend of OPV are identified and visualized. Different from the traditional review articles by the experts on OPV, this work provides a new method of visualizing information about the development of the OPV technology research over the past decade quantitatively.

  11. SAO mission support software and data standards, version 1.0

    NASA Technical Reports Server (NTRS)

    Hsieh, P.

    1993-01-01

    This document describes the software developed by the SAO AXAF Mission Support (MS) Program and defines standards for the software development process and for control of the data products generated by the software. The SAO MS is tasked to develop and use software to perform a variety of functions in support of the AXAF mission. Software is developed by software engineers and scientists, and commercial off-the-shelf (COTS) software is used either directly or customized through the use of scripts to implement analysis procedures. Software controls real-time laboratory instruments, performs data archiving, displays data, and generates model predictions. Much software is used in the analysis of data to generate data products that are required by the AXAF project, for example, on-orbit mirror performance predictions or detailed characterization of the mirror reflection performance as a function of energy.

  12. Why do nursing homes close? An analysis of newspaper articles.

    PubMed

    Fisher, Andrew; Castle, Nicholas

    2012-01-01

    Using Non-numerical Unstructured Data Indexing Searching and Theorizing (NUD'IST) software to extract and examine keywords from text, the authors explored the phenomenon of nursing home closure through an analysis of 30 major-market newspapers over a period of 66 months (January 1, 1999 to June 1, 2005). Newspaper articles typically represent a careful analysis of staff impressions via interviews, managerial perspectives, and financial records review. There is a current reliance on the synthesis of information from large regulatory databases such as the Online Survey Certification And Reporting database, the California Office of Statewide Healthcare Planning and Development database, and Area Resource Files. Although such databases permit the construction of studies capable of revealing some reasons for nursing home closure, they are hampered by the confines of the data entered. Through this analysis of newspaper articles, the authors are able to add further to the understanding of nursing home closures.

  13. SMARTSware for SMARTS users to facilitate data reduction and data analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-01-01

    The software package SMARTSware is made by one of the instrument scientists on the engineering neutron diffractometer SMARTS at the Lujan Center, a national user facility at the Los Alamos Neutron Scattering Center (LANSCE). The purpose of the software is to facilitate the analysis of powder diffraction data recorded at the Lujan Center, and hence the target audience is users performing experiments at one of the powder diffractometers (SMARTS, HIPPO, HIPD and NPDF) at the Lujan Center. Beam time at the Lujan Center is allocated by peer review of internally and externally submitted proposals, and therefore many of the users who are granted beam time are from the international science community. Generally, users are only at the Lujan Center for a short period while performing their experiments, and often they leave with several data sets that have not been analyzed. The distribution of the SMARTSware package will minimize their effort when analyzing the data once they are back at their institutions. Description of software: There are two main parts of the software: a part used to generate instrument parameter files from a set of calibration runs (SmartsIparm, SmartsBin, SmartsFitDif and SmartsFitspec); and a part that facilitates the batch refinement of multiple diffraction patterns (SmartsRunRep, SmartsABC, SmartsSPF and SmartsExtract). The former may be only peripheral to most users, but is a critical part of the instrument scientists' efforts in calibrating their instruments. The latter is highly applicable to users, as they often need to analyze or re-analyze large sets of data. The programs within the SMARTSware package rely heavily on GSAS for the Rietveld and single-peak refinements of diffraction data. GSAS (General Structure Analysis System) is publicly available software also originating from LANL. Subroutines and libraries from the NeXus project (a worldwide effort to standardize diffraction data formats) and from the National Center for Supercomputing Applications (NCSA) at the University of Illinois (the Hierarchical Data Format Software Library and Utilities) are used in the programs. All these subroutines and libraries are publicly available through the GNU Public License and/or as freeware. The package also contains sample input and output text files and a manual (LA-UR 04-6581). The executables and sample files will be available for download at http://public.lanl.gov/clausen/SMARTSware.html and ftp://lansce.lanl.gov/clausen/SMARTSware/SMARTSware.zip, but the source code will only be made available by written request to clausen@lanl.gov.

  14. Long-term Preservation of Data Analysis Capabilities

    NASA Astrophysics Data System (ADS)

    Gabriel, C.; Arviset, C.; Ibarra, A.; Pollock, A.

    2015-09-01

    While the long-term preservation of scientific data obtained by large astrophysics missions is ensured through science archives, the issue of preserving data analysis software has hardly been addressed. Efforts by large data centres have so far helped maintain some instrument- or mission-specific data reduction packages on top of high-level general-purpose data analysis software. However, it is always difficult to keep software alive without support and maintenance once the active phase of a mission is over, especially under the budgetary model followed by space agencies. We discuss the importance of extending the lifetime of dedicated data analysis packages and review diverse strategies under development at ESA that use new paradigms such as Virtual Machines, Cloud Computing, and Software as a Service to keep data analysis and calibration software fully available for decades at minimal cost.

  15. Teaching meta-analysis using MetaLight.

    PubMed

    Thomas, James; Graziosi, Sergio; Higgins, Steve; Coe, Robert; Torgerson, Carole; Newman, Mark

    2012-10-18

    Meta-analysis is a statistical method for combining the results of primary studies. It is often used in systematic reviews and is increasingly a method and topic that appears in student dissertations. MetaLight is a freely available software application that runs simple meta-analyses and contains specific functionality to facilitate the teaching and learning of meta-analysis. While there are many courses and resources for meta-analysis available and numerous software applications to run meta-analyses, there are few pieces of software which are aimed specifically at helping those teaching and learning meta-analysis. Valuable teaching time can be spent learning the mechanics of a new software application, rather than on the principles and practices of meta-analysis. We discuss ways in which the MetaLight tool can be used to present some of the main issues involved in undertaking and interpreting a meta-analysis. While there are many software tools available for conducting meta-analysis, in the context of a teaching programme such software can require expenditure both in terms of money and in terms of the time it takes to learn how to use it. MetaLight was developed specifically as a tool to facilitate the teaching and learning of meta-analysis and we have presented here some of the ways it might be used in a training situation.
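
    For readers new to the topic, the core fixed-effect (inverse-variance) calculation that a teaching tool of this kind walks students through can be sketched in a few lines of Python; the effect sizes and standard errors below are invented, and MetaLight's own implementation is not shown here.

        # Fixed-effect (inverse-variance) pooling: the basic calculation behind
        # a simple meta-analysis. Each study contributes weight w_i = 1/SE_i^2.
        effects = [0.30, 0.10, 0.25]   # per-study effect sizes (hypothetical)
        ses     = [0.12, 0.15, 0.10]   # per-study standard errors (hypothetical)

        weights = [1 / se**2 for se in ses]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_se = (1 / sum(weights)) ** 0.5

        print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")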

  16. Spacecraft Trajectory Analysis and Mission Planning Simulation (STAMPS) Software

    NASA Technical Reports Server (NTRS)

    Puckett, Nancy; Pettinger, Kris; Hallstrom, John; Brownfield, Dana; Blinn, Eric; Williams, Frank; Wiuff, Kelli; McCarty, Steve; Ramirez, Daniel; Lamotte, Nicole

    2014-01-01

    STAMPS simulates either three- or six-degree-of-freedom cases for all spacecraft flight phases using translated HAL flight software or generic GN&C models. Single or multiple trajectories can be simulated for use in optimization and dispersion analysis. It includes math models for the vehicle and environment, and currently features a "C" version of shuttle onboard flight software. The STAMPS software is used for mission planning and analysis within ascent/descent, rendezvous, proximity operations, and navigation flight design areas.

  17. The new information technologies and psychiatry.

    PubMed

    Fauman, M A

    1989-09-01

    The author reviews the history and technology of the microcomputer and discusses the various classes of software that are presently available. Three major categories of software are described: numeric data processing, text processing, and communications. The application of this software to psychiatric education and practice is briefly discussed. A short curriculum on computers for psychiatric residents is outlined, and a brief bibliography of the recent relevant literature on computer applications to medicine and psychiatry is presented. Predictions are made about the future direction of computer technology and its application to psychiatry.

  18. On-Orbit Software Analysis

    NASA Technical Reports Server (NTRS)

    Moran, Susanne I.

    2004-01-01

    The On-Orbit Software Analysis Research Infusion Project was done by Intrinsyx Technologies Corporation (Intrinsyx) at the National Aeronautics and Space Administration (NASA) Ames Research Center (ARC). The Project was a joint collaborative effort between NASA Codes IC and SL, Kestrel Technology (Kestrel), and Intrinsyx. The primary objectives of the Project were: discovery and verification of software program properties and dependencies; detection and isolation of software defects across different versions of software; and compilation of historical data and technical expertise for future applications.

  19. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.

  20. How Safe Is Control Software

    NASA Technical Reports Server (NTRS)

    Dunn, William R.; Corliss, Lloyd D.

    1991-01-01

    Paper examines issue of software safety. Presents four case histories of software-safety analysis. Concludes that, to be safe, software, for all practical purposes, must be free of errors. Backup systems still needed to prevent catastrophic software failures.

  1. Design and implementation of software for automated quality control and data analysis for a complex LC/MS/MS assay for urine opiates and metabolites.

    PubMed

    Dickerson, Jane A; Schmeling, Michael; Hoofnagle, Andrew N; Hoffman, Noah G

    2013-01-16

    Mass spectrometry provides a powerful platform for performing quantitative, multiplexed assays in the clinical laboratory, but at the cost of increased complexity of analysis and quality assurance calculations compared to other methodologies. Here we describe the design and implementation of a software application that performs quality control calculations for a complex, multiplexed, mass spectrometric analysis of opioids and opioid metabolites. The development and implementation of this application improved our data analysis and quality assurance processes in several ways. First, use of the software significantly improved the procedural consistency for performing quality control calculations. Second, it reduced the amount of time technologists spent preparing and reviewing the data, saving on average over four hours per run, and in some cases improving turnaround time by a day. Third, it provides a mechanism for coupling procedural and software changes with the results of each analysis. We describe several key details of the implementation including the use of version control software and automated unit tests. These generally useful software engineering principles should be considered for any software development project in the clinical lab. Copyright © 2012 Elsevier B.V. All rights reserved.
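
    As an illustration of the automated unit tests the authors advocate, here is a minimal example in Python's unittest style; the QC rule and its 20% tolerance are invented for the sketch and are not taken from the paper.

        import unittest

        def within_tolerance(measured: float, target: float, pct: float = 20.0) -> bool:
            """Hypothetical QC rule: a QC sample passes if it falls within
            +/- pct percent of its target concentration."""
            return abs(measured - target) <= target * pct / 100.0

        class TestQcRule(unittest.TestCase):
            def test_pass_inside_tolerance(self):
                self.assertTrue(within_tolerance(measured=110.0, target=100.0))

            def test_fail_outside_tolerance(self):
                self.assertFalse(within_tolerance(measured=130.0, target=100.0))

        if __name__ == "__main__":
            unittest.main()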

  2. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  3. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    Quality of software is not only vital to the successful operation of the space station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancy, making traditional statistical analysis unsuitable for evaluating software reliability. A statistical model was developed to represent the number as well as the types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria for terminating testing based on reliability objectives, and methods for estimating the expected number of fixes required, are also presented.

  4. Analysis of a hardware and software fault tolerant processor for critical applications

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne B.

    1993-01-01

    Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
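
    A toy example of the Markov part of such a hierarchical model, sketched in Python with invented transition probabilities (the paper's actual models are far richer): a duplex system moves from "ok" to "degraded" when a permanent fault forces reconfiguration, and to "failed" if recovery fails or a second fault arrives.

        # Discrete-time Markov sketch; probabilities are per time step and
        # purely illustrative. Each row of P sums to 1.
        P = {
            "ok":       {"ok": 0.990, "degraded": 0.009, "failed": 0.001},
            "degraded": {"degraded": 0.995, "failed": 0.005},
            "failed":   {"failed": 1.0},
        }

        state_probs = {"ok": 1.0, "degraded": 0.0, "failed": 0.0}
        for _ in range(1000):  # propagate the state distribution 1000 steps
            nxt = {s: 0.0 for s in state_probs}
            for s, p in state_probs.items():
                for t, q in P[s].items():
                    nxt[t] += p * q
            state_probs = nxt

        print(f"P(system failed) = {state_probs['failed']:.4f}")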

  5. The Negotiations of Group Authorship among Second Graders Using Multimedia Composing Software. Apple Classrooms of Tomorrow.

    ERIC Educational Resources Information Center

    Reilly, Brian

    Beginning with a review of relevant literature on learning and computers, this report focuses on a group of five second graders in the process of creating a multimedia presentation for their class. Using "StoryShow," software that combines images, sound, and text, the students took on a variety of production roles. Each one contributed…

  6. JRTF: A Flexible Software Framework for Real-Time Control in Magnetic Confinement Nuclear Fusion Experiments

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Zheng, G. Z.; Zheng, W.; Chen, Z.; Yuan, T.; Yang, C.

    2016-04-01

    Magnetic confinement nuclear fusion experiments require various real-time control applications, such as plasma control. ITER has designed the Fast Plant System Controller (FPSC) for this purpose and has provided hardware and software standards and guidelines for building one. In order to develop various real-time FPSC applications efficiently, a flexible real-time software framework called the J-TEXT real-time framework (JRTF) has been developed by the J-TEXT tokamak team. JRTF allows developers to implement different functions as independent and reusable modules called Application Blocks (ABs). AB developers only need to focus on implementing the control tasks or algorithms; timing, scheduling, data sharing, and eventing are handled by the JRTF pipelines. JRTF provides great flexibility in developing ABs: unit tests against ABs can be developed easily, and ABs can even be used in non-JRTF applications. JRTF also provides interfaces allowing JRTF applications to be configured and monitored at runtime. JRTF is compatible with ITER standard FPSC hardware and the ITER CODAC (Control, Data Access and Communication) Core software, and it can be configured and monitored using EPICS (Experimental Physics and Industrial Control System). Moreover, JRTF can be ported to different platforms and integrated with supervisory control software other than EPICS. The paper presents the design and implementation of JRTF as well as brief test results.
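
    The Application Block idea can be illustrated with a small Python sketch; this is purely illustrative of the pattern (reusable blocks driven by a pipeline that owns scheduling and data sharing) and does not reproduce JRTF's actual API, which is not shown in the abstract.

        from abc import ABC, abstractmethod

        # Illustrative only: blocks implement one method; the pipeline, not the
        # blocks, drives execution order and shared data, mirroring the AB idea.
        class ApplicationBlock(ABC):
            @abstractmethod
            def execute(self, shared: dict) -> None: ...

        class ReadSensor(ApplicationBlock):
            def execute(self, shared: dict) -> None:
                shared["plasma_current"] = 0.8  # stand-in for a hardware read

        class PidControl(ApplicationBlock):
            def execute(self, shared: dict) -> None:
                error = shared["setpoint"] - shared["plasma_current"]
                shared["actuator"] = 2.5 * error  # proportional-only toy control

        pipeline = [ReadSensor(), PidControl()]
        shared = {"setpoint": 1.0}
        for block in pipeline:
            block.execute(shared)
        print(shared)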

  7. Adolescent Female Text Messaging Preferences to Prevent Pregnancy After an Emergency Department Visit: A Qualitative Analysis

    PubMed Central

    Schnall, Rebecca; Stockwell, Melissa S; Castaño, Paula M; Higgins, Tracy; Westhoff, Carolyn; Santelli, John; Dayan, Peter S

    2016-01-01

    Background Over 15 million adolescents use the emergency department (ED) each year in the United States. Adolescent females who use the ED for medical care have been found to be at high risk for unintended pregnancy. Given that adolescents represent the largest users of text messaging and are receptive to receiving text messages related to their sexual health, the ED visit represents an opportunity for intervention. Objective The aim of this qualitative study was to explore interest in and preferences for the content, frequency, and timing of an ED-based text message intervention to prevent pregnancy for adolescent females. Methods We conducted semistructured, open-ended interviews in one urban ED in the United States with adolescent females aged 14-19 years. Eligible subjects were adolescents who were sexually active in the past 3 months, presented to the ED for a reproductive health complaint, owned a mobile phone, and did not use effective contraception. Using an interview guide, enrollment continued until saturation of key themes. The investigators designed sample text messages using the Health Beliefs Model and participants viewed these on a mobile phone. The team recorded, transcribed, and coded interviews based on thematic analysis using the qualitative analysis software NVivo and Excel. Results Participants (n=14) were predominantly Hispanic (13/14; 93%), insured (13/14; 93%), ED users in the past year (12/14; 86%), and frequent text users (10/14; 71% had sent or received >30 texts per day). All were interested in receiving text messages from the ED about pregnancy prevention, favoring messages that were “brief,” “professional,” and “nonaccusatory.” Respondents favored texts with links to websites, repeated information regarding places to receive “confidential” care, and focused information on contraception options and misconceptions. Preferences for text message frequency varied from daily to monthly, with random hours of delivery to maintain “surprise.” No participant feared that text messages would violate her privacy. Conclusions Adolescent female patients at high pregnancy risk are interested in ED-based pregnancy prevention provided by texting. Understanding preferences for the content, frequency, and timing of messages can guide in designing future interventions in the ED. PMID:27687855

  8. RGG: A general GUI Framework for R scripts

    PubMed Central

    Visne, Ilhami; Dilaveroglu, Erkan; Vierlinger, Klemens; Lauss, Martin; Yildiz, Ahmet; Weinhaeusel, Andreas; Noehammer, Christa; Leisch, Friedrich; Kriegner, Albert

    2009-01-01

    Background R is the leading open source statistics software, with a vast number of biostatistical and bioinformatical analysis packages. To exploit the advantages of R, extensive scripting/programming skills are required. Results We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated at runtime from the GUI tags that are embedded in the R script. User GUI input is returned to the R code and replaces the XML tags. RGG files can be developed using any text editor. The current version of RGG is available as stand-alone software (RGGRunner) and as a plug-in for JGR. Conclusion RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract GUI development from individual GUI toolkits by using an XML-based GUI definition language. Thus RGG can be easily integrated in any software. The RGG project further includes the development of a web-based repository for RGG-GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely. PMID:19254356
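
    The underlying mechanism (tags embedded in a script are replaced by user-supplied values before the script runs) can be mimicked in a few lines of Python; the tag syntax below is invented for illustration and is not RGG's actual GUI definition language.

        import re

        # Toy version of the RGG idea: a GUI tag embedded in a script is
        # replaced by the value a generated GUI would collect from the user.
        script = 'threshold <- <slider var="threshold" default="0.05"/>\nprint(threshold)'

        user_input = {"threshold": "0.01"}  # values the GUI would return

        def substitute(match: re.Match) -> str:
            var, default = match.group(1), match.group(2)
            return user_input.get(var, default)

        runnable = re.sub(r'<slider var="(\w+)" default="([^"]*)"/>', substitute, script)
        print(runnable)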

  9. State-of-the-Art Resources (SOAR) for Software Vulnerability Detection, Test, and Evaluation

    DTIC Science & Technology

    2014-07-01

    ...preclude in-depth analysis, and widespread use of a Software-as-a-Service (SaaS) model that limits data availability and application to DoD systems ... provide mobile application analysis using a Software-as-a-Service (SaaS) model. In this case, any software to be analyzed must be sent to the ... tools are only available through a SaaS model. The widespread use of a Software-as-a-Service (SaaS) model as a sole evaluation model limits data...

  10. A Method for Populating the Knowledge Base of AFIT’s Domain-Oriented Application Composition System

    DTIC Science & Technology

    1993-12-01

    ...Analysis (FODA). The approach identifies prominent features (similarities) and distinctive features (differences) of software systems within an ... analysis approaches we have summarized, the researchers described FODA in sufficient detail to use on large domain analysis projects (ones with ... Software Technology Center, July 1991. 18. Kang, Kyo C. and others. Feature-Oriented Domain Analysis (FODA) Feasibility Study. Technical Report, Software...

  11. Selection of software for mechanical engineering undergraduates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheah, C. T.; Yin, C. S.; Halim, T.

    A major problem with the undergraduate mechanical engineering course is students' limited exposure to software packages, coupled with the long learning curve of the existing packages. This work proposes the use of appropriate software packages across the entire mechanical engineering curriculum to ensure students get sufficient exposure to real-life design problems. A variety of software packages are highlighted as suitable for undergraduate work in mechanical engineering, e.g., for simultaneous non-linear equations; uncertainty analysis; 3-D modeling with FEA; and analysis tools for the solution of problems in thermodynamics, fluid mechanics, mechanical system design, and solid mechanics.

  12. A Case Study of Measuring Process Risk for Early Insights into Software Safety

    NASA Technical Reports Server (NTRS)

    Layman, Lucas; Basili, Victor; Zelkowitz, Marvin V.; Fisher, Karen L.

    2011-01-01

    In this case study, we examine software safety risk in three flight hardware systems in NASA's Constellation spaceflight program. We applied our Technical and Process Risk Measurement (TPRM) methodology to the Constellation hazard analysis process to quantify the technical and process risks involving software safety in the early design phase of these projects. We analyzed 154 hazard reports and collected metrics to measure the prevalence of software in hazards and the specificity of descriptions of software causes of hazardous conditions. We found that for 49-70% of the 154 hazardous conditions, software either was a potential cause or was involved in preventing the hazardous condition. We also found that 12-17% of the 2,013 hazard causes involved software, and that 23-29% of all causes had a software control. The application of the TPRM methodology identified process risks in the application of the hazard analysis process itself that may lead to software safety risk.

  13. Knowledge and utilization of computer-software for statistics among Nigerian dentists.

    PubMed

    Chukwuneke, F N; Anyanechi, C E; Obiakor, A O; Amobi, O; Onyejiaka, N; Alamba, I

    2013-01-01

    The use of computer software for statistical analysis has transformed health information and data into their simplest form in the areas of access, storage, retrieval, and analysis in the field of research. This survey was therefore carried out to assess the level of knowledge and utilization of computer software for statistical analysis among dental researchers in eastern Nigeria. Questionnaires on the use of computer software for statistical analysis were randomly distributed to 65 practicing dental surgeons with more than 5 years of experience in the tertiary academic hospitals in eastern Nigeria. The focus was on: years of clinical experience; research work experience; and knowledge and application of computer software for data processing and statistical analysis. Sixty-two (62/65; 95.4%) of these questionnaires were returned anonymously and used in our data analysis. Twenty-nine (29/62; 46.8%) respondents fell within the group with 5-10 years of clinical experience, none of whom had completed the specialist training programme. Practitioners with more than 10 years of clinical experience numbered 33 (33/62; 53.2%), of whom 15 (15/33; 45.5%) were specialists, representing 24.2% (15/62) of the total number of respondents. All 15 specialists were actively involved in research activities, and only five (5/15; 33.3%) could use statistical analysis software unaided. This study has identified poor utilization of computer software for statistical analysis among dental researchers in eastern Nigeria. This is strongly associated with a lack of early exposure to such software, especially during undergraduate training. This calls for the introduction of a computer training programme in the dental curriculum to enable practitioners to develop the habit of using computer software for their research.

  14. Melanie II--a third-generation software package for analysis of two-dimensional electrophoresis images: I. Features and user interface.

    PubMed

    Appel, R D; Palagi, P M; Walther, D; Vargas, J R; Sanchez, J C; Ravier, F; Pasquali, C; Hochstrasser, D F

    1997-12-01

    Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments that were possible thanks to new graphical interface standards such as XWindows, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory that is involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly object-oriented software package runs on multiple platforms, including Unix, MS-Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the "Virtual Lab" of the post-genome era.

  15. FunRich proteomics software analysis, let the fun begin!

    PubMed

    Benito-Martin, Alberto; Peinado, Héctor

    2015-08-01

    Protein MS analysis is the preferred method for unbiased protein identification. It is normally applied to a large number of both small-scale and high-throughput studies. However, user-friendly computational tools for protein analysis are still needed. In this issue, Mathivanan and colleagues (Proteomics 2015, 15, 2597-2601) report the development of FunRich software, an open-access software that facilitates the analysis of proteomics data, providing tools for functional enrichment and interaction network analysis of genes and proteins. FunRich is a reinterpretation of proteomic software, a standalone tool combining ease of use with customizable databases, free access, and graphical representations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A Reference Model for Software and System Inspections. White Paper

    NASA Technical Reports Server (NTRS)

    He, Lulu; Shull, Forrest

    2009-01-01

    Software Quality Assurance (SQA) is an important component of the software development process. SQA processes provide assurance that the software products and processes in the project life cycle conform to their specified requirements by planning, enacting, and performing a set of activities to provide adequate confidence that quality is being built into the software. Typical techniques include: (1) testing; (2) simulation; (3) model checking; (4) symbolic execution; (5) management reviews; (6) technical reviews; (7) inspections; (8) walk-throughs; (9) audits; (10) analysis (complexity analysis, control flow analysis, algorithmic analysis); and (11) formal methods. Our work over the last few years has resulted in substantial knowledge about SQA techniques, especially in the areas of technical reviews and inspections. But can we apply the same QA techniques to the system development process? If yes, what kind of tailoring do we need before applying them in the system engineering context? If not, what types of QA techniques are actually used at the system level? And is there any room for improvement? After a brief examination of the system engineering literature (especially focused on NASA and DoD guidance) we found that: (1) the system and software development processes interact with each other at different phases throughout the development life cycle; (2) reviews are emphasized in both system and software development (Fig. 1.3), and for some reviews (e.g., SRR, PDR, CDR) there are both system versions and software versions; (3) analysis techniques are emphasized (e.g., Fault Tree Analysis, Preliminary Hazard Analysis) and some details are given about how to apply them; and (4) reviews are expected to use the outputs of the analysis techniques; in other words, these particular analyses are usually conducted in preparation for (before) reviews. The goal of our work is to explore the interaction between the Quality Assurance (QA) techniques at the system level and the software level.

  17. Standardizing Activation Analysis: New Software for Photon Activation Analysis

    NASA Astrophysics Data System (ADS)

    Sun, Z. J.; Wells, D.; Segebade, C.; Green, J.

    2011-06-01

    Photon Activation Analysis (PAA) of environmental, archaeological, and industrial samples requires extensive data analysis that is susceptible to error. To save time and manpower and to minimize error, a computer program was designed, built, and implemented using SQL, Access 2007, and ASP.NET technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software can also be operated in browser/server mode, which makes it usable anywhere the internet is accessible. By switching the underlying nuclide library and the related formulas, the new software can be easily extended to neutron activation analysis (NAA), charged particle activation analysis (CPAA), or proton-induced X-ray emission (PIXE). Implementing this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly, and reliable.
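
    The core lookup that such a program automates can be sketched as follows; the library entries, peak energies, and tolerance are invented for illustration, and the actual program also computes concentrations and uncertainties, which this sketch omits.

        # Match observed gamma-ray peaks to a nuclide library within an
        # energy tolerance. All values are illustrative only.
        library = {  # energy (keV) -> nuclide
            511.0: "annihilation", 846.8: "Mn-56", 1368.6: "Na-24",
        }

        observed_peaks = [846.5, 1368.9, 2000.0]  # keV, from a fitted spectrum
        TOL = 1.0  # keV matching window

        for peak in observed_peaks:
            match = next(
                (nuc for e, nuc in library.items() if abs(e - peak) <= TOL), None
            )
            print(f"{peak:7.1f} keV -> {match or 'unidentified'}")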

  18. Second Generation Product Line Engineering Takes Hold in the DoD

    DTIC Science & Technology

    2014-01-01

    ..."Feature-Oriented Domain Analysis (FODA) Feasibility Study" (CMU/SEI-90-TR-021, ADA235785). Pittsburgh, PA: Software Engineering Institute ... software product line engineering and software architecture documentation and analysis. Clements is co-author of three practitioner-oriented books about...

  19. The Holistic Targeting (HOT) Methodology as the Means to Improve Information Operations (IO) Target Development and Prioritization

    DTIC Science & Technology

    2008-09-01

    ...the use of compendium software to facilitate targeting problem understanding, and the network analysis tool Palantir as an efficient and tailored semi-automated means to ... OBJECTIVES USING COMPENDIUM SOFTWARE ... HOT TARGET PRIORITIZATION AND DEVELOPMENT USING PALANTIR SOFTWARE ...

  20. Software Defined Network Monitoring Scheme Using Spectral Graph Theory and Phantom Nodes

    DTIC Science & Technology

    2014-09-01

    ...networks is the emergence of software-defined networking (SDN) [1]. SDN has existed for the ... Chapter III for network monitoring. A. SOFTWARE DEFINED NETWORKS. SDNs provide a new and innovative method to simplify network hardware by logically ... and R. Giladi, "Performance analysis of software-defined networking (SDN)," in Proc. of IEEE 21st International Symposium on Modeling, Analysis...

  1. SU-E-T-191: PITSTOP: Process Improvement Techniques, Software Tools, and Operating Principles for a Quality Initiative Discovery Framework.

    PubMed

    Siochi, R

    2012-06-01

    To develop a quality initiative discovery framework using process improvement techniques, software tools, and operating principles. Process deviations are entered into a radiotherapy incident reporting database. Supervisors use an in-house Event Analysis System (EASy) to discuss incidents with staff. Major incidents are analyzed with an in-house Fault Tree Analysis (FTA). A meta-analysis is performed using association analysis, text mining, keyword clustering, and differential frequency analysis. A key operating principle encourages the creation of forcing functions via rapid application development. 504 events were logged this past year. The keyword analysis indicates that the root cause for the top-ranked keywords was miscommunication. This was also the root cause found by association analysis, where 24% of the time that an event involved a physician it also involved a nurse. Differential frequency analysis revealed that sharp peaks at week 27 were followed by 3 major incidents, two of which were dose related. The peak was largely due to the front desk, which caused distractions in other areas. The analysis led to many PI projects, but there is still a major systematic issue with the use of forms. The solution we identified is to implement smart forms that perform error checking and interlocking. Our first initiative replaced our daily QA checklist with a form that uses custom validation routines, preventing therapists from proceeding with treatments until out-of-tolerance conditions are corrected. PITSTOP has increased the number of quality initiatives in our department, and we have discovered or confirmed common underlying causes of a variety of seemingly unrelated errors. It has motivated the replacement of all forms with smart forms. © 2012 American Association of Physicists in Medicine.
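
    A minimal sketch of the differential-frequency idea in Python, with invented weekly counts (the department's actual analysis pipeline is not shown here): flag any week whose incident count spikes well above the baseline, as week 27 did in the abstract.

        import statistics

        # Invented weekly incident counts; one week spikes above baseline.
        weekly_counts = [8, 9, 7, 10, 9, 8, 11, 9, 24, 10, 8, 9]

        mean = statistics.mean(weekly_counts)
        sd = statistics.stdev(weekly_counts)

        for week, n in enumerate(weekly_counts, start=1):
            if n > mean + 2 * sd:  # simple "sharp peak" rule
                print(f"week {week}: {n} events (baseline {mean:.1f} +/- {sd:.1f})")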

  2. Means of storage and automated monitoring of versions of text technical documentation

    NASA Astrophysics Data System (ADS)

    Leonovets, S. A.; Shukalov, A. V.; Zharinov, I. O.

    2018-03-01

    The paper considers the automation of preparation, storage, and version monitoring of textual design and program documentation by means of specialized software. Automation of documentation preparation is based on processing the engineering data contained in specifications and technical documentation. Data handling assumes strictly structured electronic documents prepared in widespread formats from templates based on industry standards, from which program or design text documents are generated automatically. The subsequent life cycle of each document and of the engineering data it contains is then controlled, with archival storage carried out at every stage. Performance studies of different widespread document formats under automated monitoring and storage are presented. The newly developed software and the workbenches available to developers of instrument equipment are described.

  3. Multimodal versus Unimodal Instruction in a Complex Learning Context.

    ERIC Educational Resources Information Center

    Gellevij, Mark; van der Meij, Hans; de Jong, Ton; Pieters, Jules

    2002-01-01

    Compared multimodal instruction using text and pictures with unimodal text-only instruction as 44 college students used a visual or a textual manual to learn a complex software application. Results initially support dual coding theory and indicate that multimodal instruction led to better performance than unimodal instruction. (SLD)

  4. Developing ASL Text in the Bilingual Classroom

    ERIC Educational Resources Information Center

    Baer, Joey; Osbrink, Rory

    2015-01-01

    Deaf students are visual learners, and technology should be part of every bilingual classroom. However, deaf students need to learn to manipulate the hardware and software that allows them to express themselves and to advance their knowledge. Students need to understand what is meant when they are referred to "ASL text" or…

  5. Supporting Struggling Readers in Secondary School Science Classes

    ERIC Educational Resources Information Center

    Roberts, Kelly D.; Takahashi, Kiriko; Park, Hye-Jin; Stodden, Robert A.

    2012-01-01

    Many secondary school students struggle to read complex expository text such as science textbooks. This article provides step-by-step guidance on how to foster expository reading for struggling readers in secondary school science classes. Two strategies are introduced: Text-to-Speech (TTS) Software as a reading compensatory strategy and the…

  6. Development of new vibration energy flow analysis software and its applications to vehicle systems

    NASA Astrophysics Data System (ADS)

    Kim, D.-J.; Hong, S.-Y.; Park, Y.-H.

    2005-09-01

    Energy flow analysis (EFA) offers very promising results in predicting the noise and vibration responses of system structures in medium-to-high frequency ranges. We have developed energy flow finite element method (EFFEM) based software, EFADSC++ R4, for vibration analysis. The software can analyze system structures composed of beam, plate, spring-damper, and rigid-body elements, among many other components developed, and has many useful analysis functions. For convenient use, the main functions of the software are modularized into a translator, a model-converter, and a solver. The translator module makes it possible to use finite element (FE) models for the vibration analysis. The model-converter module converts the FE model into an energy flow finite element (EFFE) model, generates joint elements to capture the vibrational attenuation in complex structures composed of various elements, and solves the joint-element equations very quickly using the wave transmission approach. The solver module supports various direct and iterative solvers for multi-DOF structures. Vibration predictions for real vehicles using the developed software were performed successfully.

  7. The Implication of Using NVivo Software in Qualitative Data Analysis: Evidence-Based Reflections.

    PubMed

    Zamawe, F C

    2015-03-01

    For a long time, electronic data analysis has been associated with quantitative methods. However, Computer Assisted Qualitative Data Analysis Software (CAQDAS) are increasingly being developed. Although the CAQDAS has been there for decades, very few qualitative health researchers report using it. This may be due to the difficulties that one has to go through to master the software and the misconceptions that are associated with using CAQDAS. While the issue of mastering CAQDAS has received ample attention, little has been done to address the misconceptions associated with CAQDAS. In this paper, the author reflects on his experience of interacting with one of the popular CAQDAS (NVivo) in order to provide evidence-based implications of using the software. The key message is that unlike statistical software, the main function of CAQDAS is not to analyse data but rather to aid the analysis process, which the researcher must always remain in control of. In other words, researchers must equally know that no software can analyse qualitative data. CAQDAS are basically data management packages, which support the researcher during analysis.

  8. Facile one-pot synthesis of hexagons of NaSrB5O9:Tb3+ phosphor for solid-state lighting

    NASA Astrophysics Data System (ADS)

    Ramesh, B.; Dillip, G. R.; Deva Prasad Raju, B.; Somasundaram, K.; Prasad Peddi, Siva; de Carvalho dos Anjos, Virgilio; Joo, S. W.

    2017-04-01

    NaSrB5O9:Tb3+ hexagons were synthesized by a facile solid-state reaction method. The synthesized powders were structurally examined by x-ray diffraction (XRD) analysis, and Rietveld refinement was performed on the XRD data using the Fullprof software. Hexagon-like morphology was observed using field emission scanning electron microscopy (FESEM) and transmission electron microscopy (TEM). The elemental composition of the phosphors was investigated qualitatively by energy-dispersive x-ray analysis (EDS) and quantitatively by x-ray photoelectron spectroscopy (XPS). The phosphor has a strong green emission at 545 nm under excitation at 379 nm, which is due to the 5D4 → 7F5 transition of the Tb3+ ion. A lifetime of 3.48 ms was obtained for the phosphor. The important parameters of the light source were determined, such as the thermal quenching, critical distance, nature of the dopant-ion interaction, color coordinates, and quantum yield values. Other reported properties include the site occupancy of the dopant, surface properties, morphological properties, and optical properties.

  9. Description of the GMAO OSSE for Weather Analysis Software Package: Version 3

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Errico, Ronald M.; Prive, Nikki C.; Carvalho, David; Sienkiewicz, Meta; El Akkraoui, Amal; Guo, Jing; Todling, Ricardo; McCarty, Will; Putman, William M.

    2017-01-01

    The Global Modeling and Assimilation Office (GMAO) at the NASA Goddard Space Flight Center has developed software and products for conducting observing system simulation experiments (OSSEs) for weather analysis applications. Such applications include estimating the potential effects of new observing instruments or data assimilation techniques on improving weather analysis and forecasts. The GMAO software creates simulated observations from nature run (NR) data sets and adds simulated errors to those observations. The algorithms employed are much more sophisticated than those of OSSE systems currently available elsewhere, adding a much greater degree of realism. The algorithms employed, software designs, and validation procedures are described in this document. Instructions for using the software are also provided.

  10. Analytical and Methodological Issues in the Use of Qualitative Data Analysis Software: A Description of Three Studies.

    ERIC Educational Resources Information Center

    Margerum-Leys, Jon; Kupperman, Jeff; Boyle-Heimann, Kristen

    This paper presents perspectives on the use of data analysis software in the process of qualitative research. These perspectives were gained in the conduct of three qualitative research studies that differed in theoretical frames, areas of interest, and scope. Their common use of a particular data analysis software package allows the exploration…

  11. ElectroMagnetoEncephalography Software: Overview and Integration with Other EEG/MEG Toolboxes

    PubMed Central

    Peyk, Peter; De Cesarei, Andrea; Junghöfer, Markus

    2011-01-01

    EMEGS (electromagnetic encephalography software) is a MATLAB toolbox designed to provide novice as well as expert users in the field of neuroscience with a variety of functions to perform analysis of EEG and MEG data. The software consists of a set of graphical interfaces devoted to preprocessing, analysis, and visualization of electromagnetic data. Moreover, it can be extended using a plug-in interface. Here, an overview of the capabilities of the toolbox is provided, together with a simple tutorial for both a standard ERP analysis and a time-frequency analysis. Latest features and future directions of the software development are presented in the final section. PMID:21577273

  12. ElectroMagnetoEncephalography software: overview and integration with other EEG/MEG toolboxes.

    PubMed

    Peyk, Peter; De Cesarei, Andrea; Junghöfer, Markus

    2011-01-01

    EMEGS (electromagnetic encephalography software) is a MATLAB toolbox designed to provide novice as well as expert users in the field of neuroscience with a variety of functions to perform analysis of EEG and MEG data. The software consists of a set of graphical interfaces devoted to preprocessing, analysis, and visualization of electromagnetic data. Moreover, it can be extended using a plug-in interface. Here, an overview of the capabilities of the toolbox is provided, together with a simple tutorial for both a standard ERP analysis and a time-frequency analysis. Latest features and future directions of the software development are presented in the final section.

  13. Report: Scientific Software.

    ERIC Educational Resources Information Center

    Borman, Stuart A.

    1985-01-01

    Discusses various aspects of scientific software, including evaluation and selection of commercial software products; program exchanges, catalogs, and other information sources; major data analysis packages; statistics and chemometrics software; and artificial intelligence. (JN)

  14. A Java viewer to publish Digital Imaging and Communications in Medicine (DICOM) radiologic images on the World Wide Web.

    PubMed

    Setti, E; Musumeci, R

    2001-06-01

    The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Expert Group (JPEG) and Graphic Interchange Format (GIF). Currently, neither browser can display radiologic images in the native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnifying lens. The applet is compatible with some web browsers, even older versions. The software is free and available from the author.
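
    Windowing, one of the functions mentioned, is the standard linear mapping from stored pixel intensities to display values: a window center (level) and width select the intensity band of interest. A minimal Python sketch, with arbitrary sample values (the applet's Java implementation is not shown):

        # Map a raw pixel value to an 8-bit display value given a window
        # center and width; values outside the window clip to black or white.
        def window(pixel: int, center: float, width: float) -> int:
            lo, hi = center - width / 2, center + width / 2
            if pixel <= lo:
                return 0
            if pixel >= hi:
                return 255
            return round((pixel - lo) / width * 255)

        # A CT-like value range displayed with a soft-tissue-style window:
        print([window(p, center=40, width=400) for p in (-200, 40, 300)])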

  15. Platform for Postprocessing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don

    2008-01-01

    Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC with Windows XP or Windows Vista. It has been designed with a commercial-grade interface in which two main windows, the Waveform Window and the Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information in the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, a series of images, or a simple set of X-Y paired data in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from a scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse filtering, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, B-scan information, region-of-interest analysis, line profiles, and precision feature measurements).
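
    Of the two deconvolution options named, the inverse filter is easy to sketch as a regularized spectral division; the signals below are synthetic stand-ins and do not reproduce the platform's actual implementation.

        import numpy as np

        # Inverse-filter deconvolution: divide the measured spectrum by the
        # system-response spectrum, with a floor on small bins to avoid
        # amplifying noise. All signals here are synthetic.
        n = 256
        t = np.arange(n)
        response = np.exp(-t / 8.0)                  # toy system wave response
        ideal = np.zeros(n); ideal[40] = 1.0         # toy "true" reflection
        measured = np.convolve(ideal, response)[:n]  # what the scan records

        R = np.fft.rfft(response, n)
        eps = 1e-3 * np.abs(R).max()                 # regularization floor
        R_safe = np.where(np.abs(R) < eps, eps, R)
        deconv = np.fft.irfft(np.fft.rfft(measured, n) / R_safe, n)
        print(int(np.argmax(deconv)))                # ~40: spike recovered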

  16. Geographic Information Systems and Web Page Development

    NASA Technical Reports Server (NTRS)

    Reynolds, Justin

    2004-01-01

    The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems (GIS). The value of GIS was explained in an article dating back to 1980 entitled "Need for a Multipurpose Cadastre," which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, a Geographic Information System is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is essentially database management software applied to spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. GIS can be broken down into two main categories, urban GIS and natural resource GIS. Further still, natural resource GIS can be broken down into six sub-categories: agriculture, forestry, wildlife, catchment management, archaeology, and geology/mining. Agriculture GIS has several applications, such as agricultural capability analysis, land conservation, market analysis, and whole-farm planning. Forestry GIS can be used for timber assessment and management, harvest scheduling and planning, environmental impact assessment, and pest management. When used in wildlife applications, GIS enables the user to assess and manage habitats, identify and track endangered and rare species, and monitor impacts.

  17. Mining biomedical images towards valuable information retrieval in biomedical and life sciences.

    PubMed

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches, and describing experimental results in published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms that extract and analyze the text and content of biomedical images in order to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies, results, achieved accuracies, and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction, and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.

  18. Why people download the freeware AIDA v4.3a diabetes software program: a proof-of-concept semi-automated analysis.

    PubMed

    Lehmann, Eldon D

    2003-01-01

    AIDA is a diabetes-computing program freely available at www.2aida.org on the Web. The software is intended to serve as an educational support tool and can be used by anyone who has an interest in diabetes, whether they be patients, relatives, health-care professionals, or students. In previous "Diabetes Information Technology & WebWatch" columns various indicators of usage of the AIDA program have been reviewed, and various comments from users of the software have been documented. The purpose of this column is to overview a proof-of-concept semi-automated analysis about why people are downloading the latest version of the AIDA educational diabetes program. AIDA permits the interactive simulation of plasma insulin and blood glucose profiles for teaching, demonstration, self-learning, and research purposes. It has been made freely available, without charge, on the Internet as a noncommercial contribution to continuing diabetes education. Since its launch in 1996 over 300,000 visits have been logged at the main AIDA Website-www.2aida.org-and over 60,000 copies of the AIDA program have been downloaded free-of-charge. This column documents the results of a semi-automated analysis of comments left by Website visitors while they were downloading the AIDA software, before they had a chance to use the program. The Internet-based survey methodology and semi-automated analysis were both found to be robust and reliable. Over a 5-month period (from October 3, 2001 to February 28, 2002) 400 responses were received. During the corresponding period 1,770 actual visits were made to the Website survey page-giving a response rate to this proof-of-concept study of 22.6%. Responses were received from participants in over 54 countries-with nearly half of these (n = 194; 48.5%) originating from the United States, United Kingdom, and Canada; 208 responses (52.0%) were received from patients with diabetes, 50 (12.5%) from doctors, 49 (12.3%) from relatives of patients, with fewer responses from students, diabetes educators, nurses, pharmacists, and other end users. The semi-automated analysis adopted for this study has re-affirmed the feasibility of using the Internet to obtain free-text comments, at no real cost, from a substantial number of medical software downloaders/users. The survey has also offered some insight into why members of the public continue to turn to the Internet for medical information. Furthermore it has provided useful information about why people are actually downloading the AIDA v4.3a interactive educational "virtual diabetes patient" simulator.

  19. Software selection based on analysis and forecasting methods, practised in 1C

    NASA Astrophysics Data System (ADS)

    Vazhdaev, A. N.; Chernysheva, T. Y.; Lisacheva, E. I.

    2015-09-01

    The research focuses on the built-in mechanisms of the "1C: Enterprise 8" platform for data analysis and forecasting. It is important to evaluate and select proper software in order to develop effective strategies for customer relationship management in terms of sales, as well as for the implementation and further maintenance of software. The research data allow new forecast models to be created for scheduling further software distribution.

  20. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    ...used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming ... video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to ... DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains...

  1. Software ion scan functions in analysis of glycomic and lipidomic MS/MS datasets.

    PubMed

    Haramija, Marko

    2018-03-01

    Hardware ion scan functions unique to the tandem mass spectrometry (MS/MS) mode of data acquisition, such as precursor ion scan (PIS) and neutral loss scan (NLS), are important for selective extraction of key structural data from complex MS/MS spectra. However, their software counterparts, software ion scan (SIS) functions, are still not regularly available. SIS functions can be easily coded to provide additional functionality, such as software multiple precursor ion scan, software no ion scan, and software variable ion scan functions. These are often necessary, since they allow more efficient analysis of the complex MS/MS datasets often encountered in glycomics and lipidomics. SIS functions can be coded easily using modern script languages and can be independent of the instrument manufacturer. Here we demonstrate the utility of SIS functions on a medium-size glycomic MS/MS dataset. Knowledge of sample properties, as well as of the diagnostic and conditional diagnostic ions crucial for data analysis, was needed. Based on tables constructed from the output of the SIS functions, a detailed analysis of a complex glycomic MS/MS dataset could be carried out in a quick, accurate, and efficient manner. Glycomic research is progressing slowly, and with respect to MS experiments, one of the key obstacles to moving forward is the lack of appropriate bioinformatic tools for fast analysis of glycomic MS/MS datasets. Adding novel SIS functionalities to the glycomic MS/MS toolbox has the potential to significantly speed up the glycomic data analysis process. Similar tools are useful for the analysis of lipidomic MS/MS datasets as well, as is discussed briefly. Copyright © 2017 John Wiley & Sons, Ltd.
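
    Since the paper notes that SIS functions are easily coded in modern script languages, here is a minimal Python sketch of a software precursor ion scan and a software neutral loss scan; the in-memory dataset, masses, and tolerance are invented for illustration and do not reproduce the author's code.

        # A software PIS keeps spectra containing a diagnostic fragment; a
        # software NLS keeps spectra with a fragment offset from the precursor
        # by a fixed neutral loss. All m/z values below are illustrative.
        TOL = 0.01  # m/z tolerance

        dataset = [  # (precursor m/z, list of fragment m/z values)
            (733.5, [204.087, 366.14, 528.19]),
            (655.2, [493.15, 517.20]),
        ]

        def precursor_ion_scan(data, diagnostic):
            return [s for s in data
                    if any(abs(f - diagnostic) <= TOL for f in s[1])]

        def neutral_loss_scan(data, loss):
            return [s for s in data
                    if any(abs((s[0] - f) - loss) <= TOL for f in s[1])]

        print(precursor_ion_scan(dataset, diagnostic=204.087))  # PIS at m/z 204.087
        print(neutral_loss_scan(dataset, loss=162.05))          # NLS for a 162.05 Da loss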

  2. New software for statistical analysis of Cambridge Structural Database data

    PubMed Central

    Sykes, Richard A.; McCabe, Patrick; Allen, Frank H.; Battle, Gary M.; Bruno, Ian J.; Wood, Peter A.

    2011-01-01

    A collection of new software tools is presented for the analysis of geometrical, chemical and crystallographic data from the Cambridge Structural Database (CSD). This software supersedes the program Vista. The new functionality is integrated into the program Mercury in order to provide statistical, charting and plotting options alongside three-dimensional structural visualization and analysis. The integration also permits immediate access to other information about specific CSD entries through the Mercury framework, a common requirement in CSD data analyses. In addition, the new software includes a range of more advanced features focused towards structural analysis such as principal components analysis, cone-angle correction in hydrogen-bond analyses and the ability to deal with topological symmetry that may be exhibited in molecular search fragments. PMID:22477784

  3. Development of Cell Analysis Software for Cultivated Corneal Endothelial Cells.

    PubMed

    Okumura, Naoki; Ishida, Naoya; Kakutani, Kazuya; Hongo, Akane; Hiwa, Satoru; Hiroyasu, Tomoyuki; Koizumi, Noriko

    2017-11-01

    To develop analysis software for cultured human corneal endothelial cells (HCECs). Software was designed to recognize cell borders and to provide parameters such as cell density, coefficient of variation, and polygonality of cultured HCECs based on phase contrast images. Cultured HCECs with high or low cell density were incubated with Ca-free and Mg-free phosphate-buffered saline for 10 minutes to reveal the cell borders and were then analyzed with the software (n = 50). Phase contrast images showed that cell borders were not distinctly outlined, but these borders became more distinctly outlined after phosphate-buffered saline treatment and were recognized by the cell analysis software. The cell density value provided by the software was similar to that obtained using manual cell counting by an experienced researcher. Morphometric parameters, such as the coefficient of variation and polygonality, were also produced by the software, and these values were significantly correlated with cell density (Pearson correlation coefficients -0.62 and 0.63, respectively). The software described here provides morphometric information from phase contrast images, and it enables objective and noninvasive quality assessment for tissue engineering therapy of the corneal endothelium.
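
    The abstract does not specify the underlying computation, but two of the reported parameters can be sketched from a labelled cell segmentation. A minimal Python sketch, assuming scikit-image is available and an illustrative pixel size (polygonality, which additionally requires counting each cell's neighbours, is omitted for brevity):

        # Cell density and coefficient of variation from a binary cell mask.
        # The 0.5 um/px scale is an assumption for illustration.
        import numpy as np
        from skimage.measure import label, regionprops

        def endothelial_metrics(cell_mask, um_per_px=0.5):
            regions = regionprops(label(cell_mask))
            areas_px = np.array([r.area for r in regions])
            areas_mm2 = areas_px * (um_per_px / 1000.0) ** 2
            density = len(regions) / areas_mm2.sum()   # cells per mm^2
            cv = areas_px.std() / areas_px.mean()      # variation of cell area
            return density, cv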

  4. Team Software Development for Aerothermodynamic and Aerodynamic Analysis and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, N.; Atkins, H. L.; Bibb, K. L.; Biedron, R. T.; Carpenter, M. H.; Gnoffo, P. A.; Hammond, D. P.; Jones, W. T.; Kleb, W. L.; Lee-Rausch, E. M.

    2003-01-01

    A collaborative approach to software development is described. The approach employs the agile development techniques: project retrospectives, Scrum status meetings, and elements of Extreme Programming to efficiently develop a cohesive and extensible software suite. The software product under development is a fluid dynamics simulator for performing aerodynamic and aerothermodynamic analysis and design. The functionality of the software product is achieved both through the merging, with substantial rewrite, of separate legacy codes and the authorship of new routines. Examples of rapid implementation of new functionality demonstrate the benefits obtained with this agile software development process. The appendix contains a discussion of coding issues encountered while porting legacy Fortran 77 code to Fortran 95, software design principles, and a Fortran 95 coding standard.

  5. Using Combined SFTA and SFMECA Techniques for Space Critical Software

    NASA Astrophysics Data System (ADS)

    Nicodemos, F. G.; Lahoz, C. H. N.; Abdala, M. A. D.; Saotome, O.

    2012-01-01

    This work addresses the combined Software Fault Tree Analysis (SFTA) and Software Failure Modes, Effects and Criticality Analysis (SFMECA) techniques applied to space critical software of satellite launch vehicles. The combined approach is under research as part of the Verification and Validation (V&V) efforts to increase software dependability, and for future application in other projects under development at Instituto de Aeronáutica e Espaço (IAE). The applicability of the approach was assessed on system software specification in a case study based on the Brazilian Satellite Launcher (VLS). The main goal is to identify possible failure causes and obtain compensating provisions that lead to the inclusion of new functional and non-functional system software requirements.

  6. Software development predictors, error analysis, reliability models and software metric analysis

    NASA Technical Reports Server (NTRS)

    Basili, Victor

    1983-01-01

    The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.

  7. Automated detection of follow-up appointments using text mining of discharge records.

    PubMed

    Ruud, Kari L; Johnson, Matthew G; Liesinger, Juliette T; Grafft, Carrie A; Naessens, James M

    2010-06-01

    To determine whether text mining can accurately detect specific follow-up appointment criteria in free-text hospital discharge records. Cross-sectional study. Mayo Clinic Rochester hospitals. Inpatients discharged from general medicine services in 2006 (n = 6481). Textual hospital dismissal summaries were manually reviewed to determine whether the records contained specific follow-up appointment arrangement elements: date, time and either physician or location for an appointment. The data set was evaluated for the same criteria using SAS Text Miner software. The two assessments were compared to determine the accuracy of text mining for detecting records containing follow-up appointment arrangements. Agreement of text-mined appointment findings with the gold standard (manual abstraction) was measured, including sensitivity, specificity, and positive and negative predictive values (PPV and NPV). About 55.2% (3576) of discharge records contained all criteria for follow-up appointment arrangements according to the manual review, 3.2% (113) of which were missed through text mining. Text mining incorrectly identified 3.7% (107) follow-up appointments that were not considered valid through manual review. Therefore, the text mining analysis concurred with the manual review in 96.6% of the appointment findings. Overall sensitivity and specificity were 96.8 and 96.3%, respectively; and PPV and NPV were 97.0 and 96.1%, respectively. Evaluation of individual appointment criteria resulted in accuracy rates of 93.5% for date, 97.4% for time, 97.5% for physician and 82.9% for location. Text mining of unstructured hospital dismissal summaries can accurately detect documentation of follow-up appointment arrangement elements, thus saving considerable resources for performance assessment and quality-related research.
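
    The study's SAS Text Miner rules are not reproduced in the abstract; the Python sketch below only illustrates the kind of criteria involved (date, time, and physician or location), with regular expressions that are assumptions for illustration:

        # Toy detector for follow-up appointment elements; the patterns are
        # illustrative, not the rules used in the study.
        import re

        DATE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")
        TIME = re.compile(r"\b\d{1,2}:\d{2}\s*(?:am|pm)?\b", re.I)
        WHO_OR_WHERE = re.compile(r"\b(?:Dr\.\s+\w+|clinic|office)\b", re.I)

        def has_followup_arrangement(summary):
            return all(p.search(summary)
                       for p in (DATE, TIME, WHO_OR_WHERE))

        print(has_followup_arrangement(
            "Follow up with Dr. Smith on 7/14/2006 at 10:30 am."))  # True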

  8. Software use cases to elicit the software requirements analysis within the ASTRI project

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Antolini, Elisa; Bonnoli, Giacomo; Bruno, Pietro; Bulgarelli, Andrea; Capalbi, Milvia; Fioretti, Valentina; Fugazza, Dino; Gardiol, Daniele; Grillo, Alessandro; Leto, Giuseppe; Lombardi, Saverio; Lucarelli, Fabrizio; Maccarone, Maria Concetta; Malaguti, Giuseppe; Pareschi, Giovanni; Russo, Federico; Sangiorgi, Pierluca; Schwarz, Joseph; Scuderi, Salvatore; Tanci, Claudio; Tosti, Gino; Trifoglio, Massimo; Vercellone, Stefano; Zanmar Sanchez, Ricardo

    2016-07-01

    The Italian National Institute for Astrophysics (INAF) is leading the Astrofisica con Specchi a Tecnologia Replicante Italiana (ASTRI) project whose main purpose is the realization of small size telescopes (SST) for the Cherenkov Telescope Array (CTA). The first goal of the ASTRI project has been the development and operation of an innovative end-to-end telescope prototype using a dual-mirror optical configuration (SST-2M) equipped with a camera based on silicon photo-multipliers and very fast read-out electronics. The ASTRI SST-2M prototype has been installed in Italy at the INAF "M.G. Fracastoro" Astronomical Station located at Serra La Nave, on Mount Etna, Sicily. This prototype will be used to test several mechanical, optical, control hardware and software solutions which will be used in the ASTRI mini-array, comprising nine telescopes proposed to be placed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort led by INAF and carried out by Italy, Brazil and South Africa. We present here the use cases, through UML (Unified Modeling Language) diagrams and text details, that describe the functional requirements of the software that will manage the ASTRI SST-2M prototype, and the lessons learned from these activities. We intend to adopt the same approach for the Mini Array Software System that will manage the ASTRI mini-array operations. Use cases are of importance for the whole software life cycle; in particular they provide valuable support to the validation and verification activities. Following the iterative development approach, which breaks down the software development into smaller chunks, we have analysed the requirements, developed, and then tested the code in repeated cycles. The use case technique allowed us to formalize the problem through user stories that describe how the user procedurally interacts with the software system. Through the use cases we improved the communication among team members, fostered common agreement about system requirements, defined the normal and alternative courses of events, understood the business process better, and defined the system tests that ensure the delivered software works properly. We present a summary of the ASTRI SST-2M prototype use cases, and how the lessons learned can be exploited for the ASTRI mini-array proposed for the CTA Observatory.

  9. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    PubMed

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.

  10. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    NASA Astrophysics Data System (ADS)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced H I astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy-to-use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline-specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.
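
    The channel-to-frame mapping described above is straightforward to reproduce. A minimal Python sketch, assuming NumPy and imageio are available and using a synthetic cube in place of real H I data:

        # Each spectral channel of a (n_chan, ny, nx) cube becomes one
        # movie frame; the random cube here is a stand-in for real data.
        import numpy as np
        import imageio

        cube = np.random.rand(64, 128, 128)
        lo, hi = cube.min(), cube.max()
        frames = [np.uint8(255 * (c - lo) / (hi - lo)) for c in cube]
        imageio.mimsave("cube_channels.gif", frames, duration=0.1)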

  11. BamTools: a C++ API and toolkit for analyzing and managing BAM files.

    PubMed

    Barnett, Derek W; Garrison, Erik K; Quinlan, Aaron R; Strömberg, Michael P; Marth, Gabor T

    2011-06-15

    Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools.

  12. From damage to discovery via virtual unwrapping: Reading the scroll from En-Gedi

    PubMed Central

    Seales, William Brent; Parker, Clifford Seth; Segal, Michael; Tov, Emanuel; Shor, Pnina; Porath, Yosef

    2016-01-01

    Computer imaging techniques are commonly used to preserve and share readable manuscripts, but capturing writing locked away in ancient, deteriorated documents poses an entirely different challenge. This software pipeline—referred to as “virtual unwrapping”—allows textual artifacts to be read completely and noninvasively. The systematic digital analysis of the extremely fragile En-Gedi scroll (the oldest Pentateuchal scroll in Hebrew outside of the Dead Sea Scrolls) reveals the writing hidden on its untouchable, disintegrating sheets. Our approach for recovering substantial ink-based text from a damaged object results in readable columns at such high quality that serious critical textual analysis can occur. Hence, this work creates a new pathway for subsequent textual discoveries buried within the confines of damaged materials. PMID:27679821

  13. A Survey of Bioinformatics Database and Software Usage through Mining the Literature.

    PubMed

    Duck, Geraint; Nenadic, Goran; Filannino, Michele; Brass, Andy; Robertson, David L; Stevens, Robert

    2016-01-01

    Computer-based resources are central to much, if not most, biological and medical research. However, while there is an ever expanding choice of bioinformatics resources to use, described within the biomedical literature, little work to date has provided an evaluation of the full range of availability or levels of usage of database and software resources. Here we use text mining to process the PubMed Central full-text corpus, identifying mentions of databases or software within the scientific literature. We provide an audit of the resources contained within the biomedical literature, and a comparison of their relative usage, both over time and between the sub-disciplines of bioinformatics, biology and medicine. We find that trends in resource usage differ between these domains. The bioinformatics literature emphasises novel resource development, while database and software usage within biology and medicine is more stable and conservative. Many resources are only mentioned in the bioinformatics literature, with a relatively small number making it out into general biology, and fewer still into the medical literature. In addition, many resources are seeing a steady decline in their usage (e.g., BLAST, SWISS-PROT), though some are instead seeing rapid growth (e.g., the GO, R). We find a striking imbalance in resource usage with the top 5% of resource names (133 names) accounting for 47% of total usage, and over 70% of resources extracted being only mentioned once each. While these results highlight the dynamic and creative nature of bioinformatics research they raise questions about software reuse, choice and the sharing of bioinformatics practice. Is it acceptable that so many resources are apparently never reused? Finally, our work is a step towards automated extraction of scientific method from text. We make the dataset generated by our study available under the CC0 license here: http://dx.doi.org/10.6084/m9.figshare.1281371.
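
    The headline concentration statistic is simple to recompute for any table of extracted resource mentions. A small Python sketch (the mention list is a synthetic stand-in; the real study used names mined from PubMed Central):

        # Share of total usage captured by the top 5% of resource names.
        from collections import Counter

        def top_share(mentions, fraction=0.05):
            counts = [n for _, n in Counter(mentions).most_common()]
            k = max(1, int(len(counts) * fraction))
            return sum(counts[:k]) / sum(counts)

        mentions = ["BLAST"] * 50 + ["R"] * 30 + ["GO"] * 15 + ["misc"] * 5
        print(f"{top_share(mentions):.0%}")  # 50% on this toy list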

  14. Software for Analyzing Sequences of Flow-Related Images

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2004-01-01

    Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.

  15. Analysis of Software Systems for Specialized Computers,

    DTIC Science & Technology

    computer) with given computer hardware and software. The object of study is the software system of a computer, designed for solving a fixed complex of...purpose of the analysis is to find parameters that characterize the system and its elements during operation, i.e., when servicing the given requirement flow. (Author)

  16. Flexible Software Architecture for Visualization and Seismic Data Analysis

    NASA Astrophysics Data System (ADS)

    Petunin, S.; Pavlov, I.; Mogilenskikh, D.; Podzyuban, D.; Arkhipov, A.; Baturuin, N.; Lisin, A.; Smith, A.; Rivers, W.; Harben, P.

    2007-12-01

    Research in the field of seismology requires software and signal processing utilities for seismogram manipulation and analysis. Seismologists and data analysts often encounter a major problem in the use of any particular software application specific to seismic data analysis: the tuning of commands and windows to the specific waveforms and hot key combinations so as to fit their familiar informational environment. The ability to modify the user's interface independently from the developer requires an adaptive code structure. An adaptive code structure also allows for expansion of software capabilities such as new signal processing modules and implementation of more efficient algorithms. Our approach is to use a flexible "open" architecture for development of geophysical software. This report presents an integrated solution for organizing a logical software architecture based on the Unix version of the Geotool software implemented on the Microsoft NET 2.0 platform. Selection of this platform greatly expands the variety and number of computers that can implement the software, including laptops that can be utilized in field conditions. It also facilitates implementation of communication functions for seismic data requests from remote databases through the Internet. The main principle of the new architecture for Geotool is that scientists should be able to add new routines for digital waveform analysis via software plug-ins that utilize the basic Geotool display for GUI interaction. The use of plug-ins allows the efficient integration of diverse signal-processing software, including software still in preliminary development, into an organized platform without changing the fundamental structure of that platform itself. An analyst's use of Geotool is tracked via a metadata file so that future studies can reconstruct, and alter, the original signal processing operations. The work has been completed in the framework of a joint Russian- American project.
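
    The plug-in mechanism described above can be illustrated with a small registry pattern in Python; all names here are illustrative assumptions, not the actual Geotool interfaces:

        # Analysis routines register themselves by name; the host display
        # looks them up, applies them, and can log the call as metadata
        # so the processing session can later be reconstructed.
        PLUGINS = {}

        def waveform_plugin(name):
            def register(func):
                PLUGINS[name] = func
                return func
            return register

        @waveform_plugin("demean")
        def demean(samples):
            mean = sum(samples) / len(samples)
            return [s - mean for s in samples]

        print(PLUGINS["demean"]([1.0, 2.0, 3.0]))  # [-1.0, 0.0, 1.0]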

  17. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments.

    PubMed

    Tian, Yingli; Yang, Xiaodong; Yi, Chucai; Arditi, Aries

    2013-04-01

    Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.

  18. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments

    PubMed Central

    Tian, YingLi; Yang, Xiaodong; Yi, Chucai; Arditi, Aries

    2012-01-01

    Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech. PMID:23630409

  19. Influence analysis of Github repositories.

    PubMed

    Hu, Yan; Zhang, Jun; Bai, Xiaomei; Yu, Shuo; Yang, Zhuo

    2016-01-01

    With the support of cloud computing techniques, social coding platforms have changed the style of software development. Github is now the most popular social coding platform and project hosting service. Software developers of various levels keep entering Github, and use Github to save their public and private software projects. The large numbers of software developers and software repositories on Github are posing new challenges to the world of software engineering. This paper tackles one of the important problems: analyzing the importance and influence of Github repositories. We propose a HITS-based influence analysis on graphs that represent the star relationships between Github users and repositories. A weighted version of HITS is applied to the overall star graph, and it generates a set of top influential repositories different from the results of the standard version of the HITS algorithm. We also conduct the influence analysis on per-month star graphs, and study the monthly influence rankings of top repositories.
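
    The abstract does not give the weighting scheme, but the underlying iteration is standard. A Python sketch of HITS on a user-repository star matrix (the weights and the toy graph are illustrative assumptions):

        # Users act as hubs, repositories as authorities; adj[u, r] is the
        # (possibly weighted) star of user u on repository r.
        import numpy as np

        def hits(adj, iters=50):
            hub = np.ones(adj.shape[0])
            for _ in range(iters):
                auth = adj.T @ hub            # repos starred by good hubs
                auth /= np.linalg.norm(auth)
                hub = adj @ auth              # users who star good repos
                hub /= np.linalg.norm(hub)
            return hub, auth

        adj = np.array([[1.0, 1.0, 0.0],
                        [0.0, 1.0, 1.0],
                        [0.0, 1.0, 0.0]])
        _, auth = hits(adj)
        print(auth.argmax())  # repository 1 is starred by every user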

  20. A Visual Basic simulation software tool for performance analysis of a membrane-based advanced water treatment plant.

    PubMed

    Pal, P; Kumar, R; Srivastava, N; Chaudhuri, J

    2014-02-01

    A Visual Basic simulation software (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly and menu-driven software is based on the dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results corroborate very well with the experimental findings, as indicated by an overall correlation coefficient of the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and exhibits the performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as of the individual units. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater.

  1. "To Gloss or Not To Gloss": An Investigation of Reading Comprehension Online.

    ERIC Educational Resources Information Center

    Lomika, Lara L.

    1998-01-01

    Investigated effects of multimedia reading software on reading comprehension. Twelve college students enrolled in a second semester French course were instructed to think aloud during reading of text on the computer screen. They read text under one of three conditions: full glossing, limited glossing, no glossing. Suggests computerized reading…

  2. Validation Study of Waray Text Readability Instrument

    ERIC Educational Resources Information Center

    Oyzon, Voltaire Q.; Corrales, Juven B.; Estardo, Wilfredo M., Jr.

    2015-01-01

    In 2012 the Leyte Normal University developed a computer software--modelled after the Spache Readability Formula (1953) made for English--to help rank texts that can be used by teachers or research groups in selecting appropriate reading materials to support the DepEd's MTB-MLE program in Region VIII, in the Philippines. However,…

  3. A Text-Computer Assisted Instruction Program as a Viable Alternative for Continuing Education in Laboratory Medicine.

    ERIC Educational Resources Information Center

    Bruce, A. Wayne

    1986-01-01

    Describes reasons for developing combined text and computer assisted instruction (CAI) teaching programs for delivery of continuing education to laboratory professionals, and mechanisms used for developing a CAI program on method evaluation in the clinical laboratory. Results of an evaluation of the software's cost effectiveness and instructional…

  4. Access to Full-Text Journal Articles: Some Practical Considerations.

    ERIC Educational Resources Information Center

    Racine, Drew

    1992-01-01

    Discusses some of the issues and problems involved in using electronic full-text journals in the University of Texas at Austin libraries. Issues addressed include different standards; subjects represented and the time period covered; hardware considerations; software; printing; skills and training needed for library staff and for users; and costs.…

  5. CART (Communication Access Realtime Translation). PEPNet Tipsheet

    ERIC Educational Resources Information Center

    Larson, Judy, Comp.

    1999-01-01

    Communication Access Realtime Translation--(CART)--is the instant translation of the spoken word into English text performed by a CART reporter using a stenotype machine, notebook computer and realtime software. The text is then displayed on a computer monitor or other display device for the student who is deaf or hard of hearing to read. This…

  6. Space Physics Data Facility Web Services

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provides a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.

  7. #LancerHealth: Using Twitter and Instagram as a tool in a campus wide health promotion initiative.

    PubMed

    Santarossa, Sara; Woodruff, Sarah J

    2018-02-05

    The present study aimed to explore using popular technology that people already have/use as a health promotion tool in a campus-wide social media health promotion initiative entitled #LancerHealth. During a two-week period, the university community was asked to share photos on Twitter and Instagram of "What does being healthy on campus look like to you?", while tagging the image with #LancerHealth. All publicly tagged media were collected using the Netlytic software and analysed. Text analysis (N=234 records, Twitter; N=141 records, Instagram) revealed that the majority of the conversation was positive and focused on health and the university. Social network analysis, based on five network properties, showed a small network with little interaction. Lastly, photo coding analysis (N=71 unique images) indicated that the majority of the shared images were of physical activity (52%) and taken on campus (80%). Further research into this area is warranted.

  8. Experience report: Using formal methods for requirements analysis of critical spacecraft software

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Ampo, Yoko

    1994-01-01

    Formal specification and analysis of requirements continues to gain support as a method for producing more reliable software. However, the introduction of formal methods to a large software project is difficult, due in part to the unfamiliarity of the specification languages and the lack of graphics. This paper reports results of an investigation into the effectiveness of formal methods as an aid to the requirements analysis of critical, system-level fault-protection software on a spacecraft currently under development. Our experience indicates that formal specification and analysis can enhance the accuracy of the requirements and add assurance prior to design development in this domain. The work described here is part of a larger, NASA-funded research project whose purpose is to use formal-methods techniques to improve the quality of software in space applications. The demonstration project described here is part of the effort to evaluate experimentally the effectiveness of supplementing traditional engineering approaches to requirements specification with the more rigorous specification and analysis available with formal methods.

  9. Software design for analysis of multichannel intracardial and body surface electrocardiograms.

    PubMed

    Potse, Mark; Linnenbank, André C; Grimbergen, Cornelis A

    2002-11-01

    Analysis of multichannel ECG recordings (body surface maps (BSMs) and intracardial maps) requires special software. We created a software package and a user interface on top of a commercial data analysis package (MATLAB) by a combination of high-level and low-level programming. Our software was created to satisfy the needs of a diverse group of researchers. It can handle a large variety of recording configurations. It allows for interactive usage through a fast and robust user interface, and batch processing for the analysis of large amounts of data. The package is user-extensible, includes routines for both common and experimental data processing tasks, and works on several computer platforms. The source code is made intelligible using software for structured documentation and is available to the users. The package is currently used by more than ten research groups analysing ECG data worldwide.

  10. Learning from examples - Generation and evaluation of decision trees for software resource analysis

    NASA Technical Reports Server (NTRS)

    Selby, Richard W.; Porter, Adam A.

    1988-01-01

    A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
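
    The study's attribute set and generated trees are not reproduced here, but the classification step is easy to sketch with scikit-learn on synthetic module metrics (the feature names, labeling rule, and data are all illustrative assumptions):

        # Classify modules as high-effort (1) or not (0) from code metrics.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(200, 3))     # e.g. size, changes, fan-out
        y = (0.6 * X[:, 0] + 0.4 * X[:, 1] > 0.5).astype(int)

        tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(f"{tree.score(X, y):.1%} training accuracy")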

  11. 78 FR 1162 - Cardiovascular Devices; Reclassification of External Cardiac Compressor

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-08

    ... For devices with electrical components, appropriate analysis and testing must validate electrical safety and electromagnetic compatibility; for devices containing software, software verification, validation, and hazard analysis must be performed ...

  12. FASEA: A FPGA Acquisition System and Software Event Analysis for liquid scintillation counting

    NASA Astrophysics Data System (ADS)

    Steele, T.; Mo, L.; Bignell, L.; Smith, M.; Alexiev, D.

    2009-10-01

    The FASEA (FPGA based Acquisition and Software Event Analysis) system has been developed to replace the MAC3 for coincidence pulse processing. The system uses a National Instruments Virtex 5 FPGA card (PXI-7842R) for data acquisition and purpose-developed data analysis software. Initial comparisons to the MAC3 unit, based on measurements of 89Sr and 3H, confirm that the system is able to accurately emulate the behaviour of the MAC3 unit.

  13. A User's Guide to the Meta-Analysis of Research Studies. Meta-Stat: Software To Aid in the Meta-Analysis of Research Findings.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Glass, Gene V.; Evartt, David L.; Emery, Patrick J.

    This manual and the accompanying software are intended to provide a step-by-step guide to conducting a meta-analytic study, along with references for further reading and free high-quality software, "Meta-Stat." "Meta-Stat" is a comprehensive package designed to help in the meta-analysis of research studies in the social and behavioral sciences.…

  14. Development of an automated asbestos counting software based on fluorescence microscopy.

    PubMed

    Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio

    2015-01-01

    An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software can already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.

  15. IFDOTMETER: A New Software Application for Automated Immunofluorescence Analysis.

    PubMed

    Rodríguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gómez-Sánchez, Rubén; Yakhine-Diop, S M S; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M; González-Polo, Rosa A; Fuentes, José M

    2016-04-01

    Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user's laptop or workstation that can access all image settings and provide quick and easy-to-use analysis of data. Thus, we have designed and implemented an application called IFDOTMETER, which can run on all major operating systems because it has been programmed using JAVA (Sun Microsystems). Briefly, IFDOTMETER software has been created to quantify a variety of biological hallmarks, including mitochondrial morphology and nuclear condensation. The program interface is intuitive and user-friendly, making it useful for users not familiar with computer handling. By setting previously defined parameters, the software can automatically analyze a large number of images without the supervision of the researcher. Once analysis is complete, the results are stored in a spreadsheet. Using software for high-throughput cell image analysis offers researchers the possibility of performing comprehensive and precise analysis of a high number of images in an automated manner, making this routine task easier. © 2015 Society for Laboratory Automation and Screening.

  16. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.

    PubMed

    Desai, Trunil S; Srivastava, Shireesh

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
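
    FluxPyt's internals are not shown in the abstract, but the Monte-Carlo step it describes follows a generic recipe: perturb the measured labeling data by its noise level, refit the fluxes, and take the spread of the refits. A Python sketch with a linear toy model standing in for the elementary metabolite unit simulation (all values are illustrative):

        # Monte-Carlo standard deviations for a least-squares flux fit.
        import numpy as np

        A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])  # toy model
        data = A @ np.array([2.0, 1.0])                     # "measurements"
        rng = np.random.default_rng(1)

        fits = [np.linalg.lstsq(A, data + rng.normal(0, 0.05, data.shape),
                                rcond=None)[0]
                for _ in range(1000)]
        print("flux std:", np.std(fits, axis=0))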

  17. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses

    PubMed Central

    Desai, Trunil S.

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package. PMID:29736347

  18. Reliability of skeletal maturity analysis using the cervical vertebrae maturation method on dedicated software.

    PubMed

    Padalino, Saverio; Sfondrini, Maria Francesca; Chenuil, Laura; Scudeller, Luigia; Gandini, Paola

    2014-12-01

    The aim of this study was to assess the feasibility of skeletal maturation analysis using the Cervical Vertebrae Maturation (CVM) method by means of dedicated software, developed in collaboration with Outside Format (Paullo-Milan), as compared with manual analysis. From a sample of patients aged 7-21 years, we gathered 100 lateral cephalograms, 20 for each of the five CVM stages. For each cephalogram, we traced cervical vertebrae C2, C3 and C4 both by hand, using a lead pencil and an acetate sheet, and with the dedicated software. All the tracings were made by an experienced operator (a dentofacial orthopedics resident) and by an inexperienced operator (a student in dental surgery). Each operator recorded the time needed to make each tracing in order to document differences in the times taken. Concordance between the manual analysis and the analysis performed using the dedicated software was 94% for the resident and 93% for the student. Interobserver concordance was 99%. Hand-tracing was quicker than tracing by means of the software (the software took 28 seconds longer on average). The cervical vertebrae analysis software offers excellent clinical performance, even if the method takes longer than the manual technique. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  19. Is it really so bad? A comparison of positive and negative experiences in Antarctic winter stations

    NASA Technical Reports Server (NTRS)

    Wood, J.; Hysong, S. J.; Lugg, D. J.; Harm, D. L.

    2000-01-01

    This study examined the range of positive and negative themes reported by 104 Australian Antarctic winter personnel at four stations during two austral winters. Reports from the expeditioners were subjected to a content analysis using the TextSmart software from SPSS, Inc. Results indicated that, although the list of negative experiences is lengthy, most events are relatively rare. On the other hand, although the list of positive experiences is short, the frequencies with which they are reported are much greater than for most of the problems. Possible explanations for these themes and for future directions are discussed.

  20. Methodology to build medical ontology from textual resources.

    PubMed

    Baneyx, Audrey; Charlet, Jean; Jaulent, Marie-Christine

    2006-01-01

    In the medical field, it is now established that the maintenance of unambiguous thesauri goes through ontologies. Our research task is to help pneumologists code acts and diagnoses with software that represents medical knowledge through a domain ontology. In this paper, we describe our general methodology, aimed at knowledge engineers, for building various types of medical ontologies based on terminology extraction from texts. The hypothesis is that applying natural language processing tools to textual patient discharge summaries can produce the resources needed to build an ontology in pneumology. Results indicate that the joint use of distributional analysis and lexico-syntactic patterns performed satisfactorily for building such ontologies.
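
    As a concrete illustration of the lexico-syntactic side, a single Hearst-style pattern already yields candidate hypernym/hyponym pairs for such an ontology; the pattern and the English example sentence below are illustrative stand-ins, not the patterns used in the study:

        # "X such as Y" extraction for candidate ontology relations.
        import re

        SUCH_AS = re.compile(r"(\w+(?:\s\w+)?)\s+such as\s+(\w+(?:\s\w+)?)")

        def extract_pairs(sentence):
            return [(m.group(1), m.group(2))
                    for m in SUCH_AS.finditer(sentence)]

        print(extract_pairs("obstructive diseases such as chronic bronchitis"))
        # [('obstructive diseases', 'chronic bronchitis')]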

  1. New software for 3D fracture network analysis and visualization

    NASA Astrophysics Data System (ADS)

    Song, J.; Noh, Y.; Choi, Y.; Um, J.; Hwang, S.

    2013-12-01

    This study presents new software to perform analysis and visualization of fracture network systems in 3D. The software modules for analysis and visualization (BOUNDARY, DISK3D, FNTWK3D, CSECT and BDM) have been developed using Microsoft Visual Basic.NET and the Visualization Toolkit (VTK) open-source library. Two case studies revealed that these modules handle, respectively, construction of the analysis domain, visualization of fracture geometry in 3D, calculation of equivalent pipes, production of cross-section maps and management of borehole data. The developed software for analysis and visualization of 3D fractured rock masses can be used to tackle geomechanical problems related to the strength, deformability and hydraulic behavior of fractured rock masses.

  2. Novel features and enhancements in BioBin, a tool for the biologically inspired binning and association analysis of rare variants

    PubMed Central

    Byrska-Bishop, Marta; Wallace, John; Frase, Alexander T; Ritchie, Marylyn D

    2018-01-01

    Motivation: BioBin is an automated bioinformatics tool for the multi-level biological binning of sequence variants. Herein, we present a significant update to BioBin which expands the software to facilitate a comprehensive rare variant analysis and incorporates novel features and analysis enhancements. Results: In BioBin 2.3, we extend our software tool by implementing statistical association testing, updating the binning algorithm, as well as incorporating novel analysis features, providing for a robust, highly customizable, and unified rare variant analysis tool. Availability and implementation: The BioBin software package is open source and freely available to users at http://www.ritchielab.com/software/biobin-download. Contact: mdritchie@geisinger.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28968757

  3. Development problem analysis of correlation leak detector’s software

    NASA Astrophysics Data System (ADS)

    Faerman, V. A.; Avramchuk, V. S.; Marukyan, V. M.

    2018-05-01

    The article studies the practical application and structure of correlation leak detector software and analyzes the task of designing it. The first part of the paper shows the expediency of developing correlation leak detectors for improving the operating efficiency of public utilities. The functional structure of correlation leak detectors is analyzed and the tasks of their software are defined. The second part examines several steps in the development of the software package (requirements gathering, definition of the program structure and creation of the software concept) in the context of experience with a hardware-software prototype of a correlation leak detector.
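
    The abstract concentrates on the engineering process, but the core computation such a detector performs can be sketched: cross-correlate the two pipe-sensor signals to estimate the arrival-time difference of the leak noise, then convert that delay into a position along the pipe. This sketch is a generic illustration of the correlation method, not code from the paper, and all numbers are illustrative assumptions:

        # Leak location from the cross-correlation peak of two sensors.
        import numpy as np

        def locate_leak(sig_a, sig_b, fs, pipe_len, wave_speed):
            corr = np.correlate(sig_b, sig_a, mode="full")
            tau = (corr.argmax() - (len(sig_a) - 1)) / fs  # B lags A by tau
            return (pipe_len - wave_speed * tau) / 2.0     # metres from A

        fs, v, L = 10_000, 1200.0, 100.0   # Hz, m/s, sensor spacing in m
        noise = np.random.default_rng(2).normal(size=5000)
        sig_a, sig_b = noise, np.roll(noise, 200)   # B hears the leak 20 ms later
        print(locate_leak(sig_a, sig_b, fs, L, v))  # ~38 m from sensor A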

  4. Software development tools: A bibliography, appendix C.

    NASA Technical Reports Server (NTRS)

    Riddle, W. E.

    1980-01-01

    A bibliography containing approximately 200 citations on tools which help software developers perform some development task (such as text manipulation, testing, etc.), and which would not necessarily be found as part of a computing facility is given. The bibliography comes from a relatively random sampling of the literature and is not complete. But it is indicative of the nature and range of tools currently being prepared or currently available.

  5. ThermalTracker Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The software processes recorded thermal video and detects the flight tracks of birds and bats that passed through the camera's field of view. The output is a set of images that show complete flight tracks for any detections, with the direction of travel indicated and the thermal image of the animal delineated. A report of the descriptive features of each detected track is also output in the form of a comma-separated value text file.

  6. Countering Insider Threats - Handling Insider Threats Using Dynamic, Run-Time Forensics

    DTIC Science & Technology

    2007-10-01

    able to handle the security policy requirements of a large organization containing many decentralized and diverse users, while being easily managed... contained in the TIF folder. Searching for any text string and sorting is supported also. The cache index file of Internet Explorer is not changed... containing thousands of malware software signatures. Separate datasets can be created for various classifications of malware such as encryption software

  7. Tech notes: Ongoing or planned hydro research, results of recent studies, and reviews of new books, publications, and software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Ongoing or planned hydro research, results of recent studies, and reviews of new books, publications, and software are covered. This month's Tech Notes include: (1) a study linking development and reservoir silting in El Salvador, (2) publication of a guide for small hydro operators, and (3) publication of a text outlining the development of hydroelectric power in Russia.

  8. Improved Load Alleviation Capability for the KC-135

    DTIC Science & Technology

    1997-09-01

    software, such as Matlab, Mathematica, Simulink, and Robotica Front End for Mathematica, available in the simulation laboratory...This thesis report is...outlined in Spong’s text in order to utilize the Robotica system development software, which automates the process of calculating the kinematic and...kinematic and dynamic equations can be accomplished using a computer tool called Robotica Front End (RFE) [15], developed by Doctor Spong.

  9. Evaluation of a deidentification (De-Id) software engine to share pathology reports and clinical documents for research.

    PubMed

    Gupta, Dilip; Saul, Melissa; Gilbertson, John

    2004-02-01

    We evaluated a comprehensive deidentification engine at the University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA, that uses a complex set of rules, dictionaries, pattern-matching algorithms, and the Unified Medical Language System to identify and replace identifying text in clinical reports while preserving medical information for sharing in research. In our initial data set of 967 surgical pathology reports, the software did not suppress outside (103), UPMC (47), and non-UPMC (56) accession numbers; dates (7); names (9) or initials (25) of case pathologists; or hospital or laboratory names (46). In 150 reports, some clinical information was suppressed inadvertently (overmarking). The engine retained eponymic patient names, eg, Barrett and Gleason. In the second evaluation (1,000 reports), the software did not suppress outside (90) or UPMC (6) accession numbers or names (4) or initials (2) of case pathologists. In the third evaluation, the software removed names of patients, hospitals (297/300), pathologists (297/300), transcriptionists, residents and physicians, dates of procedures, and accession numbers (298/300). By the end of the evaluation, the system was reliably and specifically removing safe-harbor identifiers and producing highly readable deidentified text without removing important clinical information. Collaboration between pathology domain experts and system developers and continuous quality assurance are needed to optimize ongoing deidentification processes.
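
    The De-Id engine's full rule set is not reproduced in the paper; the Python sketch below only illustrates the general pattern-matching approach, with regular expressions and tag names that are assumptions for illustration:

        # Toy safe-harbor scrubbing: replace identifier classes with tags,
        # leaving the surrounding clinical narrative intact.
        import re

        RULES = [
            (re.compile(r"\b[A-Z]{1,3}\d{2}-\d{3,6}\b"), "**ACCESSION**"),
            (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "**DATE**"),
            (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "**PHYSICIAN**"),
        ]

        def deidentify(report):
            for pattern, tag in RULES:
                report = pattern.sub(tag, report)
            return report

        print(deidentify("Case S04-12345 signed out by Dr. Jones on 2/14/2004."))
        # Case **ACCESSION** signed out by **PHYSICIAN** on **DATE**.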

  10. relaxGUI: a new software for fast and simple NMR relaxation data analysis and calculation of ps-ns and μs motion of proteins.

    PubMed

    Bieri, Michael; d'Auvergne, Edward J; Gooley, Paul R

    2011-06-01

    Investigation of protein dynamics on the ps-ns and μs-ms timeframes provides detailed insight into the mechanisms of enzymes and the binding properties of proteins. Nuclear magnetic resonance (NMR) is an excellent tool for studying protein dynamics at atomic resolution. Analysis of relaxation data using model-free analysis can be a tedious and time consuming process, which requires good knowledge of scripting procedures. The software relaxGUI was developed for fast and simple model-free analysis and is fully integrated into the software package relax. It is written in Python and uses wxPython to build the graphical user interface (GUI) for maximum performance and multi-platform use. This software allows the analysis of NMR relaxation data with ease and the generation of publication quality graphs as well as color coded images of molecular structures. The interface is designed for simple data analysis and management. The software was tested and validated against the command line version of relax.

  11. Performance of Different Analytical Software Packages in Quantification of DNA Methylation by Pyrosequencing.

    PubMed

    Grasso, Chiara; Trevisan, Morena; Fiano, Valentina; Tarallo, Valentina; De Marco, Laura; Sacerdote, Carlotta; Richiardi, Lorenzo; Merletti, Franco; Gillio-Tos, Anna

    2016-01-01

    Pyrosequencing has emerged as an alternative method of nucleic acid sequencing, well suited for many applications which aim to characterize single nucleotide polymorphisms, mutations, microbial types and CpG methylation in the target DNA. The commercially available pyrosequencing systems can harbor two different types of software, which allow analysis in AQ or CpG mode, respectively; both are widely employed for DNA methylation analysis. The aim of the study was to assess the performance for DNA methylation analysis at CpG sites of the two pyrosequencing software packages which allow analysis in AQ or CpG mode, respectively. Despite CpG mode having been specifically generated for CpG methylation quantification, many investigations on this topic have been carried out with AQ mode. As proof of equivalent performance of the two software packages for this type of analysis is not available, the focus of this paper was to evaluate whether the two modes currently used for CpG methylation assessment by pyrosequencing give overlapping results. We compared the performance of the two software packages in quantifying DNA methylation in the promoters of selected genes (GSTP1, MGMT, LINE-1) by testing two case series, which include DNA from paraffin embedded prostate cancer tissues (PC study, N = 36) and DNA from blood fractions of healthy people (DD study, N = 28), respectively. We found discrepancy in the two pyrosequencing software-based quality assignments of DNA methylation assays. Compared to the software for analysis in AQ mode, less permissive criteria are supported by the Pyro Q-CpG software, which enables analysis in CpG mode. CpG mode warns the operator about potentially unsatisfactory performance of the assay and ensures a more accurate quantitative evaluation of DNA methylation at CpG sites. The implementation of CpG mode is strongly advisable in order to improve the reliability of the methylation analysis results achievable by pyrosequencing.
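
    Whichever mode is used, the number reported at each CpG site is essentially an allele ratio: after bisulfite conversion, methylated cytosines still read as C while unmethylated ones read as T, so percent methylation is the C signal over the combined C and T signal. A minimal sketch of that calculation (peak heights invented; the vendor software layers assay quality checks on top):

        def percent_methylation(c_signal: float, t_signal: float) -> float:
            """Percent methylation at one CpG site from pyrogram peak heights:
            methylated C stays C after bisulfite treatment, unmethylated C
            reads as T, so methylation = C / (C + T)."""
            total = c_signal + t_signal
            if total == 0:
                raise ValueError("no signal at this CpG site")
            return 100.0 * c_signal / total

        # Peak heights for three CpG sites of a hypothetical LINE-1 assay.
        sites = [(8.2, 2.1), (7.9, 2.4), (9.1, 1.6)]
        values = [percent_methylation(c, t) for c, t in sites]
        print([round(v, 1) for v in values])        # per-site percentages
        print(round(sum(values) / len(values), 1))  # mean across the assay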

  12. Desktop software to identify patients eligible for recruitment into a clinical trial: using SARMA to recruit to the ROAD feasibility trial.

    PubMed

    Treweek, Shaun; Pearson, Ewan; Smith, Natalie; Neville, Ron; Sargeant, Paul; Boswell, Brian; Sullivan, Frank

    2010-01-01

    Recruitment to trials in primary care is often difficult, particularly when practice staff need to identify study participants with acute conditions during consultations. The Scottish Acute Recruitment Management Application (SARMA) system is linked to general practice electronic medical record (EMR) systems and is designed to provide recruitment support to multi-centre trials by screening patients against trial inclusion criteria and alerting practice staff if the patient appears eligible. For patients willing to learn more about the trial, the software allows practice staff to send the patient's contact details to the research team by text message. To evaluate the ability of the software to support trial recruitment. Software evaluation embedded in a randomised controlled trial. Five general practices in Tayside and Fife, Scotland. SARMA was used to support recruitment to a feasibility trial (the Response to Oral Agents in Diabetes, or ROAD trial) looking at users of oral therapy in diabetes. The technical performance of the software and its utility as a recruitment tool were evaluated. The software was successfully installed at four of the five general practices and recruited 11 of the 29 participants for ROAD (other methods were letter and direct invitation by a practice nurse) and had a recruitment return of 35% (11 of 31 texts sent led to a recruitment). Screen failures were relatively low (7 of 31 referred). Practice staff members were positive about the system. An automated recruitment tool can support primary care trials in Scotland and has the potential to support recruitment in other jurisdictions. It offers a low-cost supplement to other trial recruitment methods and is likely to have a much lower screen failure rate than blanket approaches such as mailshots and newspaper campaigns.
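
    The screening step amounts to evaluating rule-based inclusion criteria against the EMR record open during the consultation. A schematic sketch (field names and criteria are invented; SARMA's actual rules come from the trial configuration):

        # Hypothetical inclusion rules for an oral-therapy diabetes trial.
        def is_eligible(patient: dict) -> bool:
            return (
                patient.get("diagnosis") == "type 2 diabetes"
                and 18 <= patient.get("age", 0) <= 80
                and patient.get("on_oral_therapy", False)
                and not patient.get("on_insulin", False)
            )

        patient = {"diagnosis": "type 2 diabetes", "age": 57,
                   "on_oral_therapy": True, "on_insulin": False}
        if is_eligible(patient):
            # In SARMA, staff confirm interest before contact details
            # are texted to the research team.
            print("Alert practice staff: patient appears eligible")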

  13. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment -- San Joaquin Basin (5010): Chapter 28 in Petroleum systems and geologic assessment of oil and gas in the San Joaquin Basin Province, California

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the assessment process. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The reader may import these data into computers and software without transcription from the portable document format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  14. Chapter 2: Tabular Data and Graphical Images in Support of the U.S. Geological Survey National Oil and Gas Assessment - The Wind River Basin Province

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The reader may import these data into computers and software without transcription from the Portable Document Format files (.pdf files) of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).
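
    Because the tables ship as plain tab-delimited text, they load directly into ordinary analysis tools without transcription; for example, in Python (the file name is illustrative):

        import csv

        # Read one tab-delimited table as shipped on the CD-ROM.
        with open("assessment_results.tab", newline="") as f:
            rows = list(csv.reader(f, delimiter="\t"))

        header, data = rows[0], rows[1:]
        print(header)
        print(len(data), "records")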

  15. Toward Baseline Software Anomalies in NASA Missions

    NASA Technical Reports Server (NTRS)

    Layman, Lucas; Zelkowitz, Marvin; Basili, Victor; Nikora, Allen P.

    2012-01-01

    In this fast abstract, we provide preliminary findings from an analysis of 14,500 spacecraft anomalies from unmanned NASA missions. We provide some baselines for the distributions of software vs. non-software anomalies in spaceflight systems, the risk ratings of software anomalies, and the corrective actions associated with software anomalies.

  16. Computing and software

    USGS Publications Warehouse

    White, Gary C.; Hines, J.E.

    2004-01-01

    The reality is that the statistical methods used for analysis of data depend upon the availability of software. Analysis of marked animal data is no different from the rest of the statistical field. The methods used for analysis are those that are available in reliable software packages. Thus, the critical importance of having reliable, up-to-date software available to biologists is obvious. Statisticians have continued to develop more robust models, ever expanding the suite of potential analysis methods available. But without software to implement these newer methods, they will languish in the abstract and not be applied to the problems deserving them. In the Computers and Software Session, two new software packages are described, a comparison of implementations of methods for the estimation of nest survival is provided, and a more speculative paper about how the next generation of software might be structured is presented. Rotella et al. (2004) compare nest survival estimation with different software packages: SAS logistic regression, SAS non-linear mixed models, and Program MARK. Nests are assumed to be visited at various, possibly infrequent, intervals. All of the approaches described compute nest survival with the same likelihood, and require that the age of the nest is known to account for nests that eventually hatch (see the sketch after this paragraph). However, each approach offers advantages and disadvantages, explored by Rotella et al. (2004). Efford et al. (2004) present a new software package called DENSITY. The package computes population abundance and density from trapping arrays and other detection methods with a new and unique approach. DENSITY represents the first major addition to the analysis of trapping arrays in 20 years. Barker & White (2004) discuss how existing software such as Program MARK requires that each new model's likelihood be programmed specifically for that model. They wishfully think that future software might allow the user to combine pieces of likelihood functions together to generate estimates. The idea is interesting, and maybe some bright young statistician can work out the specifics to implement the procedure. Choquet et al. (2004) describe MSURGE, a software package that implements multistate capture-recapture models. The unique feature of MSURGE is that the design matrix is constructed with an interpreted language called GEMACO. Because MSURGE is limited to just multistate models, the special requirements of these likelihoods can be provided for. The software and methods presented in these papers give biologists and wildlife managers an expanding range of possibilities for data analysis. Although ease of use is generally getting better, it does not replace the need for understanding of the requirements and structure of the models being computed. The internet provides access to many free software packages as well as user-discussion groups to share knowledge and ideas. (A starting point for wildlife-related applications is http://www.phidot.org.)
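
    The shared likelihood behind the Rotella et al. (2004) comparison treats daily survival as a rate raised to the length of each observation interval. A compact sketch of maximum-likelihood estimation under that model (interval data invented):

        import math

        # Each record: (days between visits, 1 if the nest survived them).
        intervals = [(5, 1), (4, 1), (6, 0), (5, 1), (3, 0)]

        def log_likelihood(s: float) -> float:
            """s**t per survived interval, (1 - s**t) per failed interval."""
            ll = 0.0
            for t, survived in intervals:
                p = s ** t
                ll += math.log(p if survived else 1.0 - p)
            return ll

        # Coarse grid search for the MLE of the daily survival rate.
        best = max((k / 1000 for k in range(1, 1000)), key=log_likelihood)
        print(f"daily survival rate ~ {best:.3f}")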

  17. USER'S GUIDE: Strategic Waste Minimization Initiative (SWAMI) Version 2.0 - A Software Tool to Aid in Process Analysis for Pollution Prevention

    EPA Science Inventory

    The Strategic WAste Minimization Initiative (SWAMI) Software, Version 2.0 is a tool for using process analysis for identifying waste minimization opportunities within an industrial setting. The software requires user-supplied information for process definition, as well as materia...

  18. Integrating Model-Based Verification into Software Design Education

    ERIC Educational Resources Information Center

    Yilmaz, Levent; Wang, Shuo

    2005-01-01

    Proper design analysis is indispensable to assure quality and reduce emergent costs due to faulty software. Teaching proper design verification skills early during pedagogical development is crucial, as such analysis is the only tractable way of resolving software problems early when they are easy to fix. The premise of the presented strategy is…

  19. Software Reliability, Measurement, and Testing. Volume 2. Guidebook for Software Reliability Measurement and Testing

    DTIC Science & Technology

    1992-04-01

    contractor’s existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the ... either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a ... to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more

  20. Orbit Software Suite

    NASA Technical Reports Server (NTRS)

    Osgood, Cathy; Williams, Kevin; Gentry, Philip; Brownfield, Dana; Hallstrom, John; Stuit, Tim

    2012-01-01

    Orbit Software Suite is used to support a variety of NASA/DM (Dependable Multiprocessor) mission planning and analysis activities on the IPS (Intrusion Prevention System) platform. The suite of Orbit software tools (Orbit Design and Orbit Dynamics) resides on IPS/Linux workstations, and is used to perform mission design and analysis tasks corresponding to trajectory/launch window, rendezvous, and proximity operations flight segments. A list of tools in Orbit Software Suite represents tool versions established during/after the Equipment Rehost-3 Project.

  1. FMT (Flight Software Memory Tracker) For Cassini Spacecraft-Software Engineering Using JAVA

    NASA Technical Reports Server (NTRS)

    Kan, Edwin P.; Uffelman, Hal; Wax, Allan H.

    1997-01-01

    The software engineering design of the Flight Software Memory Tracker (FMT) Tool is discussed in this paper. FMT is a ground analysis software set, consisting of utilities and procedures, designed to track the flight software, i.e., images of memory load and updatable parameters of the computers on-board Cassini spacecraft. FMT is implemented in Java.

  2. Engineering Complex Embedded Systems with State Analysis and the Mission Data System

    NASA Technical Reports Server (NTRS)

    Ingham, Michel D.; Rasmussen, Robert D.; Bennett, Matthew B.; Moncada, Alex C.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering methodology called State Analysis, which provides a process for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using State Analysis and how these requirements inform the design of the system software, using representative spacecraft examples.

  3. Pilot Study of an Open-source Image Analysis Software for Automated Screening of Conventional Cervical Smears.

    PubMed

    Sanyal, Parikshit; Ganguli, Prosenjit; Barui, Sanghita; Deb, Prabal

    2018-01-01

    The Pap stained cervical smear is a screening tool for cervical cancer. Commercial systems are used for automated screening of liquid based cervical smears. However, there is no image analysis software used for conventional cervical smears. The aim of this study was to develop and test the diagnostic accuracy of software for analysis of conventional smears. The software was developed using the Python programming language and open source libraries. It was standardized with images from the Bethesda Interobserver Reproducibility Project. One hundred and thirty images from smears which were reported Negative for Intraepithelial Lesion or Malignancy (NILM), and 45 images where some abnormality had been reported, were collected from the archives of the hospital. The software was then tested on the images. The software was able to segregate images based on overall nuclear:cytoplasmic ratio, coefficient of variation (CV) in nuclear size, nuclear membrane irregularity, and clustering. 68.88% of abnormal images were flagged by the software, as well as 19.23% of NILM images. The major difficulties faced were segmentation of overlapping cell clusters and separation of neutrophils. The software shows potential as a screening tool for conventional cervical smears; however, further refinement in technique is required.
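
    Once nuclei and cytoplasm have been segmented (the hard step, as the authors note), the features described reduce to simple arithmetic over region areas. A sketch with invented pixel counts and thresholds:

        import statistics

        # Areas in pixels from one hypothetical segmented field of view.
        nuclear_areas = [410, 395, 880, 402, 415]   # one enlarged nucleus
        cytoplasm_area = 52000

        nc_ratio = sum(nuclear_areas) / cytoplasm_area
        cv_nuclear = statistics.stdev(nuclear_areas) / statistics.mean(nuclear_areas)

        print(f"overall N:C ratio: {nc_ratio:.3f}")
        print(f"CV of nuclear size: {cv_nuclear:.2f}")

        # Flag the image for review if either feature exceeds a threshold.
        if nc_ratio > 0.05 or cv_nuclear > 0.3:
            print("flag image for review")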

  4. A workflow learning model to improve geovisual analytics utility

    PubMed Central

    Roth, Robert E; MacEachren, Alan M; McCabe, Craig A

    2011-01-01

    Introduction This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. Objectives The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. Methodology The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on the concept of scientific workflows. Second, we implemented an interface in the G-EX Portal Learn Module to demonstrate the workflow learning model. The workflow interface allows users to drag learning artifacts uploaded to the G-EX Portal onto a central whiteboard and then annotate the workflow using text and drawing tools. Once completed, users can visit the assembled workflow to get an idea of the kind, number, and scale of analysis steps, view individual learning artifacts associated with each node in the workflow, and ask questions about the overall workflow or individual learning artifacts through the associated forums. An example learning workflow in the domain of epidemiology is provided to demonstrate the effectiveness of the approach. Results/Conclusions In the context of geovisual analytics, GIScientists are not only responsible for developing software to facilitate visually-mediated reasoning about large and complex spatiotemporal information, but also for ensuring that this software works. The workflow learning model discussed in this paper and demonstrated in the G-EX Portal Learn Module is one approach to improving the utility of geovisual analytics software. While development of the G-EX Portal Learn Module is ongoing, we expect to release the G-EX Portal Learn Module by Summer 2009. PMID:21983545

  5. A workflow learning model to improve geovisual analytics utility.

    PubMed

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on the concept of scientific workflows. Second, we implemented an interface in the G-EX Portal Learn Module to demonstrate the workflow learning model. The workflow interface allows users to drag learning artifacts uploaded to the G-EX Portal onto a central whiteboard and then annotate the workflow using text and drawing tools. Once completed, users can visit the assembled workflow to get an idea of the kind, number, and scale of analysis steps, view individual learning artifacts associated with each node in the workflow, and ask questions about the overall workflow or individual learning artifacts through the associated forums. An example learning workflow in the domain of epidemiology is provided to demonstrate the effectiveness of the approach. RESULTS/CONCLUSIONS: In the context of geovisual analytics, GIScientists are not only responsible for developing software to facilitate visually-mediated reasoning about large and complex spatiotemporal information, but also for ensuring that this software works. The workflow learning model discussed in this paper and demonstrated in the G-EX Portal Learn Module is one approach to improving the utility of geovisual analytics software. While development of the G-EX Portal Learn Module is ongoing, we expect to release the G-EX Portal Learn Module by Summer 2009.

  6. Lessons Learned from Application of System and Software Level RAMS Analysis to a Space Control System

    NASA Astrophysics Data System (ADS)

    Silva, N.; Esper, A.

    2012-01-01

    The work presented in this article represents the results of applying RAMS analysis to a critical space control system, both at system and software levels. The system level RAMS analysis allowed the assignment of criticalities to the high level components, which was further refined by a tailored software level RAMS analysis. The importance of the software level RAMS analysis in the identification of new failure modes and its impact on the system level RAMS analysis is discussed. Recommendations of changes in the software architecture have also been proposed in order to reduce the criticality of the SW components to an acceptable minimum. The dependability analysis was performed in accordance with ECSS-Q-ST-80, which had to be tailored and complemented in some aspects. This tailoring will also be detailed in the article, and lessons learned from its application will be shared, underlining their importance for space systems safety evaluations. The paper presents the applied techniques, the relevant results obtained, the effort required for performing the tasks and the planned strategy for ROI estimation, as well as the soft skills required and acquired during these activities.

  7. Off-the-shelf Control of Data Analysis Software

    NASA Astrophysics Data System (ADS)

    Wampler, S.

    The Gemini Project must provide convenient access to data analysis facilities to a wide user community. The international nature of this community makes the selection of data analysis software particularly interesting, with staunch advocates of systems such as ADAM and IRAF among the users. Additionally, the continuing trends towards increased use of networked systems and distributed processing impose additional complexity. To meet these needs, the Gemini Project is proposing the novel approach of using low-cost, off-the-shelf software to abstract out both the control and distribution of data analysis from the functionality of the data analysis software. For example, the orthogonal nature of control versus function means that users might select analysis routines from both ADAM and IRAF as appropriate, distributing these routines across a network of machines. It is the belief of the Gemini Project that this approach results in a system that is highly flexible, maintainable, and inexpensive to develop. The Khoros visualization system is presented as an example of control software that is currently available for providing the control and distribution within a data analysis system. The visual programming environment provided with Khoros is also discussed as a means to providing convenient access to this control.

  8. TLM-Tracker: software for cell segmentation, tracking and lineage analysis in time-lapse microscopy movies.

    PubMed

    Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter

    2012-09-01

    Time-lapse imaging in combination with fluorescence microscopy techniques enables the investigation of gene regulatory circuits and has uncovered phenomena such as culture heterogeneity. In this context, computational image processing for the analysis of single cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, TLM-Tracker, allows for flexible and user-friendly segmentation, tracking and lineage analysis of microbial cells in time-lapse movies. The software package, including manual, tutorial video and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.

  9. Failure-Modes-And-Effects Analysis Of Software Logic

    NASA Technical Reports Server (NTRS)

    Garcia, Danny; Hartline, Thomas; Minor, Terry; Statum, David; Vice, David

    1996-01-01

    Rigorous analysis applied early in design effort. Method of identifying potential inadequacies and modes and effects of failures caused by inadequacies (failure-modes-and-effects analysis or "FMEA" for short) devised for application to software logic.

  10. PubMedPortable: A Framework for Supporting the Development of Text Mining Applications.

    PubMed

    Döring, Kersten; Grüning, Björn A; Telukunta, Kiran K; Thomas, Philippe; Günther, Stefan

    2016-01-01

    Information extraction from biomedical literature is continuously growing in scope and importance. Many tools exist that perform named entity recognition, e.g. of proteins, chemical compounds, and diseases. Furthermore, several approaches deal with the extraction of relations between identified entities. The BioCreative community supports these developments with yearly open challenges, which led to a standardised XML text annotation format called BioC. PubMed provides access to the largest open biomedical literature repository, but there is no unified way of connecting its data to natural language processing tools. Therefore, an appropriate data environment is needed as a basis to combine different software solutions and to develop customised text mining applications. PubMedPortable builds a relational database and a full text index on PubMed citations. It can be applied either to the complete PubMed data set or an arbitrary subset of downloaded PubMed XML files. The software provides the infrastructure to combine stand-alone applications by exporting different data formats, e.g. BioC. The presented workflows show how to use PubMedPortable to retrieve, store, and analyse a disease-specific data set. The provided use cases are well documented in the PubMedPortable wiki. The open-source software library is small, easy to use, and scalable to the user's system requirements. It is freely available for Linux on the web at https://github.com/KerstenDoering/PubMedPortable and for other operating systems as a virtual container. The approach was tested extensively and applied successfully in several projects.
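
    The core idea, a queryable relational store plus a full-text index over PubMed citations, can be sketched with Python's standard library, assuming the bundled SQLite includes the FTS5 extension (as most builds do); the XML file name is illustrative and the real PubMedPortable schema is far richer:

        import sqlite3
        import xml.etree.ElementTree as ET

        # Parse PMID, title and abstract from a downloaded PubMed XML file.
        tree = ET.parse("pubmed_subset.xml")
        records = []
        for cit in tree.iter("MedlineCitation"):
            pmid = cit.findtext("PMID")
            title = cit.findtext(".//ArticleTitle") or ""
            abstract = " ".join(t.text or "" for t in cit.iter("AbstractText"))
            records.append((pmid, title, abstract))

        con = sqlite3.connect("pubmed.db")
        con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS citations "
                    "USING fts5(pmid, title, abstract)")
        con.executemany("INSERT INTO citations VALUES (?, ?, ?)", records)
        con.commit()

        # Full-text query over the indexed citations.
        for row in con.execute("SELECT pmid, title FROM citations "
                               "WHERE citations MATCH ?", ("methylation",)):
            print(row)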

  11. PubMedPortable: A Framework for Supporting the Development of Text Mining Applications

    PubMed Central

    Döring, Kersten; Grüning, Björn A.; Telukunta, Kiran K.; Thomas, Philippe; Günther, Stefan

    2016-01-01

    Information extraction from biomedical literature is continuously growing in scope and importance. Many tools exist that perform named entity recognition, e.g. of proteins, chemical compounds, and diseases. Furthermore, several approaches deal with the extraction of relations between identified entities. The BioCreative community supports these developments with yearly open challenges, which led to a standardised XML text annotation format called BioC. PubMed provides access to the largest open biomedical literature repository, but there is no unified way of connecting its data to natural language processing tools. Therefore, an appropriate data environment is needed as a basis to combine different software solutions and to develop customised text mining applications. PubMedPortable builds a relational database and a full text index on PubMed citations. It can be applied either to the complete PubMed data set or an arbitrary subset of downloaded PubMed XML files. The software provides the infrastructure to combine stand-alone applications by exporting different data formats, e.g. BioC. The presented workflows show how to use PubMedPortable to retrieve, store, and analyse a disease-specific data set. The provided use cases are well documented in the PubMedPortable wiki. The open-source software library is small, easy to use, and scalable to the user’s system requirements. It is freely available for Linux on the web at https://github.com/KerstenDoering/PubMedPortable and for other operating systems as a virtual container. The approach was tested extensively and applied successfully in several projects. PMID:27706202

  12. CAMBerVis: visualization software to support comparative analysis of multiple bacterial strains.

    PubMed

    Woźniak, Michał; Wong, Limsoon; Tiuryn, Jerzy

    2011-12-01

    A number of inconsistencies in genome annotations are documented among bacterial strains. Visualization of the differences may help biologists to make correct decisions in spurious cases. We have developed a visualization tool, CAMBerVis, to support comparative analysis of multiple bacterial strains. The software manages simultaneous visualization of multiple bacterial genomes, enabling visual analysis focused on genome structure annotations. The CAMBerVis software is freely available at the project website: http://bioputer.mimuw.edu.pl/camber. Input datasets for Mycobacterium tuberculosis and Staphylococcus aureus are integrated with the software as examples. m.wozniak@mimuw.edu.pl Supplementary data are available at Bioinformatics online.

  13. Object-oriented productivity metrics

    NASA Technical Reports Server (NTRS)

    Connell, John L.; Eller, Nancy

    1992-01-01

    Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.
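
    The abstract does not spell out the counting rules, but the flavor of deriving effort points from object-oriented structure can be illustrated by weighting classes and their methods; the weights below are entirely invented, and this sketch operates on code rather than on the preliminary analysis artifacts the authors describe:

        import ast
        import textwrap

        # Toy weights: a base cost per class plus a cost per method.
        CLASS_WEIGHT, METHOD_WEIGHT = 4, 1

        source = textwrap.dedent("""
            class Telemetry:
                def read(self): ...
                def decode(self): ...

            class Logger:
                def write(self): ...
        """)

        tree = ast.parse(source)
        points = 0
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                methods = [n for n in node.body
                           if isinstance(n, ast.FunctionDef)]
                points += CLASS_WEIGHT + METHOD_WEIGHT * len(methods)

        print("object-oriented effort points:", points)  # (4+2) + (4+1) = 11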

  14. State analysis requirements database for engineering complex embedded systems

    NASA Technical Reports Server (NTRS)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a means of capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  15. Wnt pathway curation using automated natural language processing: combining statistical methods with partial and full parse for knowledge extraction.

    PubMed

    Santos, Carlos; Eggle, Daniela; States, David J

    2005-04-15

    Wnt signaling is a very active area of research with highly relevant publications appearing at a rate of more than one per day. Building and maintaining databases describing signal transduction networks is a time-consuming and demanding task that requires careful literature analysis and extensive domain-specific knowledge. For instance, more than 50 factors involved in Wnt signal transduction have been identified as of late 2003. In this work we describe a natural language processing (NLP) system that is able to identify references to biological interaction networks in free text and automatically assembles a protein association and interaction map. A 'gold standard' set of names and assertions was derived by manual scanning of the Wnt genes website (http://www.stanford.edu/~rnusse/wntwindow.html), including 53 interactions involved in Wnt signaling. This system was used to analyze a corpus of peer-reviewed articles related to Wnt signaling, including 3369 PubMed records and 1230 full text papers. Names for key Wnt-pathway associated proteins and biological entities are identified using a chi-squared analysis of noun phrases over-represented in the Wnt literature as compared to the general signal transduction literature. Interestingly, we identified several instances where generic terms were used on the website when more specific terms occur in the literature, and one typographic error on the Wnt canonical pathway. Using the named entity list and performing an exhaustive assertion extraction of the corpus, 34 of the 53 interactions in the 'gold standard' Wnt signaling set were successfully identified (64% recall). In addition, the automated extraction found several interactions involving key Wnt-related molecules which were missing or different from those in the canonical diagram, and these were confirmed by manual review of the text. These results suggest that a combination of NLP techniques for information extraction can form a useful first-pass tool for assisting human annotation and maintenance of signal pathway databases. The pipeline software components are freely available on request to the authors. dstates@umich.edu http://stateslab.bioinformatics.med.umich.edu/software.html.
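
    The over-representation step is a standard 2x2 chi-squared test per noun phrase: occurrences in the Wnt corpus versus occurrences in the background signal-transduction corpus. A minimal sketch with invented counts:

        # 2x2 contingency table for one noun phrase:
        #                  Wnt corpus   background corpus
        #  phrase occurs        a               b
        #  other phrases        c               d
        def chi_squared(a: int, b: int, c: int, d: int) -> float:
            """Shortcut chi-squared statistic for a 2x2 table."""
            n = a + b + c + d
            return n * (a * d - b * c) ** 2 / (
                (a + b) * (c + d) * (a + c) * (b + d))

        # "beta-catenin" in 120 of 10,000 Wnt-corpus noun phrases versus
        # 40 of 50,000 background phrases (all counts invented).
        score = chi_squared(120, 40, 10_000 - 120, 50_000 - 40)
        print(f"chi-squared = {score:.1f}")  # large value => over-represented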

  16. Acoustic Emission Analysis Applet (AEAA) Software

    NASA Technical Reports Server (NTRS)

    Nichols, Charles T.; Roth, Don J.

    2013-01-01

    NASA Glenn Research and NASA White Sands Test Facility have developed software supporting an automated pressure vessel structural health monitoring (SHM) system based on acoustic emissions (AE). The software, referred to as the Acoustic Emission Analysis Applet (AEAA), provides analysts with a tool that can interrogate data collected on Digital Wave Corp. and Physical Acoustics Corp. software using a wide spectrum of powerful filters and charts. This software can be made to work with any data once the data format is known. The applet will compute basic AE statistics, and statistics as a function of time and pressure (see figure). AEAA provides value added beyond the analysis provided by the respective vendors' analysis software. The software can handle data sets of unlimited size. A wide variety of government and commercial applications could benefit from this technology, notably requalification and usage tests for compressed gas and hydrogen-fueled vehicles. Future enhancements will add features similar to a "check engine" light on a vehicle. Once installed, the system will ultimately be used to alert International Space Station crewmembers to critical structural instabilities, but will have little impact to missions otherwise. Diagnostic information could then be transmitted to experienced technicians on the ground in a timely manner to determine whether pressure vessels have been impacted, are structurally unsound, or can be safely used to complete the mission.

  17. A review of some problems in global-local stress analysis

    NASA Technical Reports Server (NTRS)

    Nelson, Richard B.

    1989-01-01

    The various types of local-global finite-element problems point out the need to develop a new generation of software. First, this new software needs to have a complete analysis capability, encompassing linear and nonlinear analysis of 1-, 2-, and 3-dimensional finite-element models, as well as mixed dimensional models. The software must be capable of treating static and dynamic (vibration and transient response) problems, including the stability effects of initial stress, and the software should be able to treat both elastic and elasto-plastic materials. The software should carry a set of optional diagnostics to assist the program user during model generation in order to help avoid obvious structural modeling errors. In addition, the program software should be well documented so the user has a complete technical reference for each type of element contained in the program library, including information on such topics as the type of numerical integration, use of underintegration, and inclusion of incompatible modes, etc. Some packaged information should also be available to assist the user in building mixed-dimensional models. An important advancement in finite-element software should be in the development of program modularity, so that the user can select from a menu various basic operations in matrix structural analysis.

  18. Maintaining the Health of Software Monitors

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Rungta, Neha

    2013-01-01

    Software health management (SWHM) techniques complement the rigorous verification and validation processes that are applied to safety-critical systems prior to their deployment. These techniques are used to monitor deployed software in its execution environment, serving as the last line of defense against the effects of a critical fault. SWHM monitors use information from the specification and implementation of the monitored software to detect violations, predict possible failures, and help the system recover from faults. Changes to the monitored software, such as adding new functionality or fixing defects, therefore, have the potential to impact the correctness of both the monitored software and the SWHM monitor. In this work, we describe how the results of a software change impact analysis technique, Directed Incremental Symbolic Execution (DiSE), can be applied to monitored software to identify the potential impact of the changes on the SWHM monitor software. The results of DiSE can then be used by other analysis techniques, e.g., testing, debugging, to help preserve and improve the integrity of the SWHM monitor as the monitored software evolves.

  19. Taking the Observatory to the Astronomer

    NASA Astrophysics Data System (ADS)

    Bisque, T. M.

    1997-05-01

    Since 1992, Software Bisque's Remote Astronomy Software has been used by the Mt. Wilson Institute to allow interactive control of a 24" telescope and digital camera via modem. Software Bisque now introduces a comparable, relatively low-cost observatory system that allows powerful, yet "user-friendly" telescope and CCD camera control via the Internet. Utilizing software developed for the Windows 95/NT operating systems, the system offers point-and-click access to comprehensive celestial databases, extremely accurate telescope pointing, rapid download of digital CCD images by one or many users and flexible image processing software for data reduction and analysis. Our presentation will describe how the power of the personal computer has been leveraged to provide professional-level tools to the amateur astronomer, and include a description of this system's software and hardware components. The system software includes TheSky Astronomy Software™, CCDSoft CCD Astronomy Software™, TPoint Telescope Pointing Analysis System™ software, Orchestrate™ and, optionally, the RealSky CDs. The system hardware includes the Paramount GT-1100™ Robotic Telescope Mount, as well as third party CCD cameras, focusers and optical tube assemblies.

  20. Development of automation software for neutron activation analysis process in Malaysian nuclear agency

    NASA Astrophysics Data System (ADS)

    Yussup, N.; Rahman, N. A. A.; Ibrahim, M. M.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.

    2017-01-01

    The Neutron Activation Analysis (NAA) process has been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s. Most of the established procedures, especially from sample registration to sample analysis, are performed manually. These manual procedures carried out by the NAA laboratory personnel are time consuming and inefficient. Hence, software to support system automation was developed to provide an effective method to replace redundant manual data entries and produce a faster sample analysis and calculation process. This paper describes the design and development of automation software for the NAA process, which consists of three sub-programs: sample registration; hardware control and data acquisition; and sample analysis. The data flow and connection between the sub-programs are explained. The software is developed using the National Instruments LabVIEW development package.

  1. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data.

    PubMed

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates reuse in other software packages.
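
    FieldTrip itself is MATLAB, but the multitaper idea behind its time-frequency analysis, averaging spectra over several orthogonal Slepian tapers to trade frequency smoothing for variance reduction, translates directly to other environments; a Python sketch with SciPy (signal synthetic):

        import numpy as np
        from scipy.signal import windows

        fs = 1000                        # sampling rate, Hz
        t = np.arange(0, 1, 1 / fs)
        x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)

        # A family of discrete prolate spheroidal (Slepian) tapers.
        tapers = windows.dpss(x.size, NW=4, Kmax=7)

        # Average periodograms of the tapered signal across tapers.
        spectra = [np.abs(np.fft.rfft(x * taper)) ** 2 for taper in tapers]
        psd = np.mean(spectra, axis=0)
        freqs = np.fft.rfftfreq(x.size, 1 / fs)

        print(f"spectral peak near {freqs[np.argmax(psd)]:.0f} Hz")  # ~40 Hz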

  2. Software and package applicating for network meta-analysis: A usage-based comparative study.

    PubMed

    Xu, Chang; Niu, Yuming; Wu, Junyi; Gu, Huiyun; Zhang, Chao

    2017-12-21

    To compare and analyze the characteristics and functions of software applications for network meta-analysis (NMA). PubMed, EMbase, The Cochrane Library, the official websites of Bayesian inference Using Gibbs Sampling (BUGS), Stata and R, and Google were searched to collect software and packages for performing NMA; software and packages published up to March 2016 were included. After collecting the software, packages, and their user guides, we used the software and packages to calculate a typical example. All characteristics, functions, and computed results were compared and analyzed. Ten types of software were included, including programming and non-programming software. They were developed mainly on the basis of Bayesian or frequentist theory. Most types of software have the characteristics of easy operation, easy mastery, exact calculation, or excellent graphing. However, no single software performed accurate calculations with superior graphing; this could only be achieved through the combination of two or more types of software. This study suggests that users should choose the appropriate software according to their programming background, operational habits, and financial resources, and then consider combining BUGS with R (or Stata) to perform the NMA. © 2017 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  3. Toward Intelligent Software Defect Detection

    NASA Technical Reports Server (NTRS)

    Benson, Markland J.

    2011-01-01

    Source code level software defect detection has gone from state of the art to a software engineering best practice. Automated code analysis tools streamline many of the aspects of formal code inspections but have the drawback of being difficult to construct and either prone to false positives or severely limited in the set of defects that can be detected. Machine learning technology provides the promise of learning software defects by example, easing construction of detectors and broadening the range of defects that can be found. Pinpointing software defects with the same level of granularity as prominent source code analysis tools distinguishes this research from past efforts, which focused on analyzing software engineering metrics data with granularity limited to that of a particular function rather than a line of code.
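
    The learning-by-example idea at line granularity can be caricatured with a bag-of-tokens classifier over labeled source lines; the four training lines and their labels below are invented, and real detectors use far richer features:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        # Source lines labeled 1 (defective) or 0 (clean).
        lines = [
            "if (p = NULL)",                        # assignment, not comparison
            "strcpy(buf, input)",                   # unbounded copy
            "if (p == NULL)",
            "strncpy(buf, input, sizeof(buf) - 1)",
        ]
        labels = [1, 1, 0, 0]

        vec = CountVectorizer(token_pattern=r"\S+")
        clf = MultinomialNB().fit(vec.fit_transform(lines), labels)

        test = ["if (q = NULL)"]
        print(clf.predict(vec.transform(test)))     # expect [1]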

  4. Using Turnitin to Provide Feedback on L2 Writers' Texts

    ERIC Educational Resources Information Center

    Kostka, Ilka; Maliborska, Veronika

    2016-01-01

    Second language (L2) writing instructors have varying tools at their disposal for providing feedback on students' writing, including ones that enable them to provide written and audio feedback in electronic form. One tool that has been underexplored is Turnitin, a widely used software program that matches electronic text to a wide range of…

  5. Human vs. Computer Diagnosis of Students' Natural Selection Knowledge: Testing the Efficacy of Text Analytic Software

    ERIC Educational Resources Information Center

    Nehm, Ross H.; Haertig, Hendrik

    2012-01-01

    Our study examines the efficacy of Computer Assisted Scoring (CAS) of open-response text relative to expert human scoring within the complex domain of evolutionary biology. Specifically, we explored whether CAS can diagnose the explanatory elements (or Key Concepts) that comprise undergraduate students' explanatory models of natural selection with…

  6. Structural Analysis Using NX Nastran 9.0

    NASA Technical Reports Server (NTRS)

    Rolewicz, Benjamin M.

    2014-01-01

    NX Nastran is a powerful Finite Element Analysis (FEA) software package used to solve linear and non-linear models for structural and thermal systems. The software, which consists of both a solver and a user interface, breaks down analysis into four files, each of which is important to the end results of the analysis. The software offers capabilities for a variety of types of analysis, and also contains a respectable modeling program. Over the course of ten weeks, I was trained to effectively implement NX Nastran in structural analysis and refinement for parts of two missions at NASA's Kennedy Space Center, the Restore mission and the Orion mission.

  7. Development of Ada language control software for the NASA power management and distribution test bed

    NASA Technical Reports Server (NTRS)

    Wright, Ted; Mackin, Michael; Gantose, Dave

    1989-01-01

    The Ada language software developed to control the NASA Lewis Research Center's Power Management and Distribution testbed is described. The testbed is a reduced-scale prototype of the electric power system to be used on space station Freedom. It is designed to develop and test hardware and software for a 20-kHz power distribution system. The distributed, multiprocessor, testbed control system has an easy-to-use operator interface with an understandable English-text format. A simple interface for algorithm writers that uses the same commands as the operator interface is provided, encouraging interactive exploration of the system.

  8. A Virtual World of Visualization

    NASA Technical Reports Server (NTRS)

    1998-01-01

    In 1990, Sterling Software, Inc., developed the Flow Analysis Software Toolkit (FAST) for NASA Ames on contract. FAST is a workstation based modular analysis and visualization tool. It is used to visualize and animate grids and grid oriented data, typically generated by finite difference, finite element and other analytical methods. FAST is now available through COSMIC, NASA's software storehouse.

  9. A parallel and sensitive software tool for methylation analysis on multicore platforms.

    PubMed

    Tárraga, Joaquín; Pérez, Mariano; Orduña, Juan M; Duato, José; Medina, Ignacio; Dopazo, Joaquín

    2015-10-01

    DNA methylation analysis suffers from very long processing times, as the advent of Next-Generation Sequencers has shifted the bottleneck of genomic studies from the sequencers that obtain the DNA samples to the software that performs the analysis of these samples. The existing software for methylation analysis does not seem to scale efficiently with either the size of the dataset or the length of the reads to be analyzed. As it is expected that the sequencers will provide longer and longer reads in the near future, efficient and scalable methylation software should be developed. We present a new software tool, called HPG-Methyl, which efficiently maps bisulphite sequencing reads on DNA, analyzing DNA methylation. The strategy used by this software consists of leveraging the speed of the Burrows-Wheeler Transform to map a large number of DNA fragments (reads) rapidly, as well as the accuracy of the Smith-Waterman algorithm, which is exclusively employed to deal with the most ambiguous and shortest reads. Experimental results on platforms with Intel multicore processors show that HPG-Methyl significantly outperforms state-of-the-art software such as Bismark, BS-Seeker or BSMAP in both execution time and sensitivity, particularly for long bisulphite reads. The software is provided in the form of C libraries and functions, together with instructions to compile and execute it. Available by sftp to anonymous@clariano.uv.es (password 'anonymous'). juan.orduna@uv.es or jdopazo@cipf.es. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
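
    The bisulfite-specific twist the mapper builds on is easy to state: reads and reference are C-to-T collapsed so an ordinary (Burrows-Wheeler indexed) aligner can match them, and methylation is then called by comparing the original read back to the reference. A greatly simplified, single-strand, ungapped sketch:

        def c_to_t(seq: str) -> str:
            """Collapse the bisulfite ambiguity for alignment."""
            return seq.replace("C", "T")

        def call_methylation(read: str, ref: str) -> list:
            """A read C over a reference C is methylated; a read T over a
            reference C was unmethylated and bisulfite-converted."""
            return [(i, "methylated" if r == "C" else "unmethylated")
                    for i, (r, g) in enumerate(zip(read, ref)) if g == "C"]

        ref = "ACGTTCGGAC"
        read = "ATGTTCGGAT"                 # bisulfite read aligned at offset 0
        assert c_to_t(read) == c_to_t(ref)  # converted sequences match
        print(call_methylation(read, ref))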

  10. Matlab-Excel Interface for OpenDSS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The software allows users of the OpenDSS grid modeling software to access their load flow models using a GUI interface developed in MATLAB. The circuit definitions are entered into a Microsoft Excel spreadsheet which makes circuit creation and editing a much simpler process than the basic text-based editors used in the native OpenDSS interface. Plot tools have been developed which can be accessed through a MATLAB GUI once the desired parameters have been simulated.
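
    On Windows, the same pattern, a host environment driving OpenDSS's text command interface, can be reproduced from Python through the COM server that OpenDSS registers (the circuit file name is illustrative; the MATLAB GUI wraps equivalent calls):

        import win32com.client  # pywin32

        dss = win32com.client.Dispatch("OpenDSSEngine.DSS")
        dss.Start(0)

        text = dss.Text
        text.Command = "compile my_circuit.dss"
        text.Command = "solve"

        # Pull per-unit bus voltages from the active circuit for plotting.
        print(list(dss.ActiveCircuit.AllBusVmagPu))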

  11. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    NASA Astrophysics Data System (ADS)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings to correctly segment 67% of the total possible diatom valves and fragments from broad fields of view. (183 light microscope images containing 255 diatom particles were examined. Of the 255 diatom particles present, 216 diatom valves and fragments of valves were processed, with 170 properly analyzed and focused upon by the software.) Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, thus highlighting that the software has an approximate five-fold efficiency advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.
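
    Threshold-based particle segmentation of this kind takes only a few lines with an image library; a scikit-image sketch follows (file name and size cutoff invented), keeping in mind that choosing settings which retain low-contrast valves while splitting touching ones is exactly the hard part the study reports:

        from skimage import filters, io, measure

        img = io.imread("diatom_field.png", as_gray=True)

        # Global threshold, then label connected particles.
        mask = img < filters.threshold_otsu(img)  # diatoms darker than background
        labels = measure.label(mask)
        particles = measure.regionprops(labels)

        # A size filter discards debris; touching valves remain a failure mode.
        valves = [p for p in particles if p.area > 200]
        print(f"{len(valves)} candidate diatom particles")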

  12. The integration of the risk management process with the lifecycle of medical device software.

    PubMed

    Pecoraro, F; Luzi, D

    2014-01-01

    The application of software in the Medical Device (MD) domain has become central to the improvement of diagnoses and treatments. The new European regulations that specifically address software as an important component of MD, require complex procedures to make software compliant with safety requirements, introducing thereby new challenges in the qualification and classification of MD software as well as in the performance of risk management activities. Under this perspective, the aim of this paper is to propose an integrated framework that combines the activities to be carried out by the manufacturer to develop safe software within the development lifecycle based on the regulatory requirements reported in US and European regulations as well as in the relevant standards and guidelines. A comparative analysis was carried out to identify the main issues related to the application of the current new regulations. In addition, standards and guidelines recently released to harmonise procedures for the validation of MD software have been used to define the risk management activities to be carried out by the manufacturer during the software development process. This paper highlights the main issues related to the qualification and classification of MD software, providing an analysis of the different regulations applied in Europe and the US. A model that integrates the risk management process within the software development lifecycle has been proposed too. It is based on regulatory requirements and considers software risk analysis as a central input to be managed by the manufacturer already at the initial stages of the software design, in order to prevent MD failures. Relevant changes in the process of MD development have been introduced with the recognition of software being an important component of MDs as stated in regulations and standards. This implies the performance of highly iterative processes that have to integrate the risk management in the framework of software development. It also makes it necessary to involve both medical and software engineering competences to safeguard patient and user safety.

  13. Prony Ringdown GUI (CERTS Prony Ringdown, part of the DSI Tool Box)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuffner, Francis; Marinovici, Laurentiu; Hauer, John

    2014-02-21

    The PNNL Prony Ringdown graphical user interface is one analysis tool included in the Dynamic System Identification toolbox (DSI Toolbox). The DSI Toolbox is a MATLAB-based collection of tools for parsing and analyzing phasor measurement unit data, especially with regard to small signal stability. It includes tools to read the data, preprocess it, and perform small signal analysis, and is designed to provide a research environment for examining phasor measurement unit data and performing small signal stability analysis. The software uses a series of text-driven menus to help guide users and organize the toolbox features. Methods for reading in phasor measurement unit data are provided, with appropriate preprocessing options for small-signal-stability analysis. The toolbox includes the Prony Ringdown GUI and basic algorithms to estimate information on oscillatory modes of the system, such as modal frequency and damping ratio.
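
    The modal-estimation step the toolbox automates is classical Prony analysis: fit a linear prediction model to the ringdown samples, then convert the roots of its characteristic polynomial to continuous-time poles. The sketch below is a textbook version in Python, not the DSI Toolbox's MATLAB code; the model order and sampling rate are illustrative.

        import numpy as np

        def prony_modes(y, dt, order):
            """Estimate modal frequencies (Hz) and damping ratios of a ringdown."""
            N = len(y)
            # Linear prediction: y[n] = a1*y[n-1] + ... + ap*y[n-p]
            A = np.column_stack([y[order - k - 1:N - k - 1] for k in range(order)])
            a, *_ = np.linalg.lstsq(A, y[order:N], rcond=None)
            z = np.roots(np.concatenate(([1.0], -a)))   # discrete-time poles
            s = np.log(z.astype(complex)) / dt          # continuous-time poles
            return np.abs(s.imag) / (2 * np.pi), -s.real / np.abs(s)

        # Usage: a 0.25 Hz mode with 5% damping sampled at 30 Hz (a common PMU rate).
        dt = 1 / 30
        t = np.arange(0, 20, dt)
        wn = 2 * np.pi * 0.25
        y = np.exp(-0.05 * wn * t) * np.cos(wn * np.sqrt(1 - 0.05**2) * t)
        freq, zeta = prony_modes(y, dt, order=2)
        print(freq.max(), zeta[np.argmax(freq)])        # ~0.25 Hz, ~0.05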

  14. WormQTL—public archive and analysis web portal for natural variation data in Caenorhabditis spp

    PubMed Central

    Snoek, L. Basten; Van der Velde, K. Joeri; Arends, Danny; Li, Yang; Beyer, Antje; Elvin, Mark; Fisher, Jasmin; Hajnal, Alex; Hengartner, Michael O.; Poulin, Gino B.; Rodriguez, Miriam; Schmid, Tobias; Schrimpf, Sabine; Xue, Feng; Jansen, Ritsert C.; Kammenga, Jan E.; Swertz, Morris A.

    2013-01-01

    Here, we present WormQTL (http://www.wormqtl.org), an easily accessible database enabling search, comparative analysis and meta-analysis of all data on variation in Caenorhabditis spp. Over the past decade, Caenorhabditis elegans has become instrumental for molecular quantitative genetics and the systems biology of natural variation. These efforts have resulted in a wealth of phenotypic, high-throughput molecular and genotypic data across different developmental worm stages and environments in hundreds of C. elegans strains. WormQTL provides a workbench of analysis tools for genotype–phenotype linkage and association mapping based on, but not limited to, R/qtl (http://www.rqtl.org). All data can be uploaded and downloaded using simple delimited text or Excel formats and are accessible via a public web user interface for biologists and via R statistical and web service interfaces for bioinformaticians, based on the open source MOLGENIS and xQTL workbench software. WormQTL welcomes data submissions from other worm researchers. PMID:23180786
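
    The linkage mapping that WormQTL wraps (via R/qtl) reduces, in its simplest form, to testing at each marker whether genotype explains trait variation. As an illustration of the statistic rather than of WormQTL's or R/qtl's implementation, a minimal single-marker LOD scan in Python:

        import numpy as np

        def lod_scan(genotypes, phenotype):
            """LOD = (n/2) * log10(RSS_null / RSS_marker) at each marker.
            genotypes: (strains, markers) array coded 0/1; phenotype: trait values."""
            y = phenotype - phenotype.mean()
            n = len(y)
            rss0 = np.sum(y ** 2)                       # null model: mean only
            lods = np.empty(genotypes.shape[1])
            for j in range(genotypes.shape[1]):
                x = genotypes[:, j] - genotypes[:, j].mean()
                beta = x @ y / (x @ x)                  # one-marker least squares
                lods[j] = 0.5 * n * np.log10(rss0 / np.sum((y - beta * x) ** 2))
            return lods

        # Toy usage: 100 strains, 50 markers, marker 10 drives the trait.
        rng = np.random.default_rng(0)
        G = rng.integers(0, 2, size=(100, 50)).astype(float)
        y = 1.5 * G[:, 10] + rng.normal(size=100)
        print(int(np.argmax(lod_scan(G, y))))           # -> 10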

  15. WormQTL--public archive and analysis web portal for natural variation data in Caenorhabditis spp.

    PubMed

    Snoek, L Basten; Van der Velde, K Joeri; Arends, Danny; Li, Yang; Beyer, Antje; Elvin, Mark; Fisher, Jasmin; Hajnal, Alex; Hengartner, Michael O; Poulin, Gino B; Rodriguez, Miriam; Schmid, Tobias; Schrimpf, Sabine; Xue, Feng; Jansen, Ritsert C; Kammenga, Jan E; Swertz, Morris A

    2013-01-01

    Here, we present WormQTL (http://www.wormqtl.org), an easily accessible database enabling search, comparative analysis and meta-analysis of all data on variation in Caenorhabditis spp. Over the past decade, Caenorhabditis elegans has become instrumental for molecular quantitative genetics and the systems biology of natural variation. These efforts have resulted in a wealth of phenotypic, high-throughput molecular and genotypic data across different developmental worm stages and environments in hundreds of C. elegans strains. WormQTL provides a workbench of analysis tools for genotype-phenotype linkage and association mapping based on, but not limited to, R/qtl (http://www.rqtl.org). All data can be uploaded and downloaded using simple delimited text or Excel formats and are accessible via a public web user interface for biologists and via R statistical and web service interfaces for bioinformaticians, based on the open source MOLGENIS and xQTL workbench software. WormQTL welcomes data submissions from other worm researchers.

  16. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    PubMed

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution concentrates on facilitating the extensibility and interoperability of the software by decoupling the data model from the user interface. With assistance from the Python ecosystem in particular, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. Contact: wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.
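
    The decoupling of data model from user interface that the note credits for ImagePy's extensibility can be illustrated with a generic plugin-registry pattern. This sketch shows the design principle only; it is not ImagePy's actual plugin API.

        import numpy as np

        # A registry separating the data model (arrays in, arrays out) from
        # whatever front end (GUI, CLI, script) eventually invokes the plugins.
        REGISTRY = {}

        def plugin(name):
            """Decorator registering an image operation under a menu name."""
            def wrap(func):
                REGISTRY[name] = func
                return func
            return wrap

        @plugin("Invert")
        def invert(img):
            return img.max() - img

        @plugin("Threshold")
        def threshold(img, level=128):
            return (img >= level).astype(np.uint8) * 255

        # New operations plug in without touching interface code.
        img = np.arange(256, dtype=np.uint8).reshape(16, 16)
        out = REGISTRY["Threshold"](img, level=100)
        print(out[0, 0], out[-1, -1])                   # -> 0 255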

  17. Advanced Automation for Ion Trap Mass Spectrometry-New Opportunities for Real-Time Autonomous Analysis

    NASA Technical Reports Server (NTRS)

    Palmer, Peter T.; Wong, C. M.; Salmonson, J. D.; Yost, R. A.; Griffin, T. P.; Yates, N. A.; Lawless, James G. (Technical Monitor)

    1994-01-01

    The utility of MS/MS for both target compound analysis and the structure elucidation of unknowns has been described in a number of references. A broader acceptance of this technique has not yet been realized, as it requires large, complex, and costly instrumentation that has not been competitive with more conventional techniques. Recent advancements in ion trap mass spectrometry promise to change this situation. Although the ion trap's small size, sensitivity, and ability to perform multiple stages of mass spectrometry have made it eminently suitable for on-line, real-time monitoring applications, advanced automation techniques are required to make these capabilities more accessible to non-experts. Towards this end we have developed custom software for the design and implementation of MS/MS experiments. This software allows the user to take full advantage of the ion trap's versatility with respect to ionization techniques, scan proxies, and ion accumulation/ejection methods. Additionally, expert system software has been developed for autonomous target compound analysis. This software has been linked to ion trap control software and a commercial data system to bring all of the steps in the analysis cycle under control of the expert system. These software development efforts and their utilization for a number of trace analysis applications are described.

  18. Introducing a New Software for Geodetic Analysis

    NASA Astrophysics Data System (ADS)

    Hjelle, G. A.; Dähnn, M.; Fausk, I.; Kirkvik, A. S.; Mysen, E.

    2016-12-01

    At the Norwegian Mapping Authority, we are currently developing Where, a new software for geodetic analysis. Where is built on our experiences with the Geosat software, and will be able to analyse and combine data from VLBI, SLR, GNSS and DORIS. The software is mainly written in Python, which has proved very fruitful. The code is quick to write and the architecture is easily extendable and maintainable. The Python community provides a rich eco-system of tools for doing data analysis, including effective data storage and powerful visualization. Python interfaces well with other languages, so that we can easily reuse existing, well-tested code like the SOFA and IERS libraries. This presentation will show some of the current capabilities of Where, including benchmarks against other software packages. In addition we will report on some simple investigations we have done using the software, and outline our plans for further progress.

  19. Managing biological networks by using text mining and computer-aided curation

    NASA Astrophysics Data System (ADS)

    Yu, Seok Jong; Cho, Yongseong; Lee, Min-Ho; Lim, Jongtae; Yoo, Jaesoo

    2015-11-01

    In order to understand a biological mechanism in a cell, a researcher must collect a huge number of protein interactions with experimental data from experiments and the literature. Text mining systems that extract biological interactions from papers have been used to construct biological networks for a few decades. Even though text mining of the literature is necessary to construct a biological network, few systems with a text mining tool are available for biologists who want to construct their own biological networks. We have developed a biological network construction system called BioKnowledge Viewer that can generate a biological interaction network by using a text mining tool and biological taggers. It also includes Boolean simulation software, providing a biological modeling system to simulate the model that is made with the text mining tool. A user can download PubMed articles and construct a biological network by using the Multi-level Knowledge Emergence Model (KMEM), MetaMap, and A Biomedical Named Entity Recognizer (ABNER) as the text mining tools. To evaluate the system, we constructed an aging-related biological network that consists of 9,415 nodes (genes) by using manual curation. With network analysis, we found that several genes, including JNK, AP-1, and BCL-2, were highly connected in the aging network. We provide a semi-automatic curation environment so that users can obtain a graph database for managing text mining results generated in the server system and can navigate the network with BioKnowledge Viewer, which is freely available at http://bioknowledgeviewer.kisti.re.kr.
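
    The extraction step at the heart of such a pipeline can be sketched with a simple pattern matcher: find sentences in which two known entities flank an interaction verb. Real systems (the KMEM/MetaMap/ABNER combination above) are far more sophisticated; the entity lexicon and verb list here are hypothetical.

        import re

        # Hypothetical lexicon; a real pipeline would use a tagger such as ABNER.
        ENTITIES = {"JNK", "AP-1", "BCL-2", "p53"}
        VERBS = r"(activates|inhibits|binds|phosphorylates)"

        def extract_interactions(sentence):
            """Return (subject, verb, object) triples for 'A <verb> B' patterns."""
            names = "|".join(re.escape(e) for e in ENTITIES)
            pattern = rf"({names})\s+{VERBS}\s+({names})"
            return re.findall(pattern, sentence)

        print(extract_interactions("JNK activates AP-1 whereas p53 inhibits BCL-2."))
        # -> [('JNK', 'activates', 'AP-1'), ('p53', 'inhibits', 'BCL-2')]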

  20. Software dependability in the Tandem GUARDIAN system

    NASA Technical Reports Server (NTRS)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

    Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses the evaluation of the dependability of operational software. The software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures initially attributed to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) differing from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling based on the data shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.

  1. NASA software specification and evaluation system design, part 2

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A survey and analysis of the existing methods, tools and techniques employed in the development of software are presented, along with recommendations for the construction of reliable software. Functional designs for the software specification language and the database verifier are presented.

  2. Quality assurance software inspections at NASA Ames: Metrics for feedback and modification

    NASA Technical Reports Server (NTRS)

    Wenneson, G.

    1985-01-01

    Software inspections, a set of formal technical review procedures held at selected key points during software development in order to find defects in software documents, are described in terms of history, participants, tools, procedures, statistics, and database analysis.

  3. Analysis-Software for Hyperspectral Algal Reflectance Probes v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timlin, Jerilyn A.; Reichardt, Thomas A.; Jenson, Travis J.

    This software provides onsite analysis of the hyperspectral reflectance data acquired on an outdoor algal pond by a multichannel, fiber-coupled spectroradiometer. The analysis algorithm is based on numerical inversion of a reflectance model, in which the above-water reflectance is expressed as a function of the single backscattering albedo, which is dependent on the backscatter and absorption coefficients of the algal culture, which are in turn related to the algal biomass and pigment optical activity, respectively. Prior to the development of this software, while raw multichannel data were displayed in real time, analysis required a post-processing procedure to extract the relevant parameters. This software provides the capability to track the temporal variation of such culture parameters in real time, as raw data are being acquired, or can be run in a post-processing mode. The software allows the user to select between different algal species, incorporate the appropriate calibration data, and observe the quality of the resulting model inversions.
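
    The inversion the software performs can be sketched as nonlinear least squares against the stated model, with reflectance proportional to the single backscattering albedo bb/(a + bb). The spectral shapes, water absorption constant, and proportionality factor below are placeholders, not the calibrated values in this software.

        import numpy as np
        from scipy.optimize import least_squares

        wl = np.linspace(400, 700, 61)                  # wavelengths, nm
        b_shape = np.ones_like(wl)                      # flat backscatter spectrum
        a_shape = 1.0 + np.exp(-(wl - 440) ** 2 / 800)  # pigment absorption peak
        AW = 0.5                                        # placeholder water absorption

        def model(p, g=0.33):
            """Reflectance r = g * bb/(a + bb); p[0] scales biomass-driven
            backscatter, p[1] scales pigment-driven absorption."""
            bb = p[0] * b_shape
            a = AW + p[1] * a_shape
            return g * bb / (a + bb)

        def invert(reflectance):
            """Recover (biomass, pigment) scales from a measured spectrum."""
            fit = least_squares(lambda p: model(p) - reflectance,
                                x0=[0.1, 1.0], bounds=([1e-6, 1e-6], np.inf))
            return fit.x

        truth = np.array([0.05, 2.0])
        r_meas = model(truth) + np.random.default_rng(1).normal(0, 1e-4, wl.size)
        print(invert(r_meas))                           # ~ [0.05, 2.0]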

  4. Use of speech-to-text technology for documentation by healthcare providers.

    PubMed

    Ajami, Sima

    2016-01-01

    Medical records are a critical component of a patient's treatment. However, documentation of patient-related information is considered a secondary activity in the provision of healthcare services, often leading to incomplete medical records and patient data of low quality. Advances in information technology (IT) in the health system and registration of information in electronic health records (EHR) using speech-to-text conversion software have facilitated service delivery. This narrative review is a literature search with the help of libraries, books, conference proceedings, databases of Science Direct, PubMed, Proquest, Springer, SID (Scientific Information Database), and search engines such as Yahoo and Google. I used the following keywords and their combinations: speech recognition, automatic report documentation, voice to text software, healthcare, information, and voice recognition. Due to lack of knowledge of other languages, I searched all texts in English or Persian with no time limits. Of a total of 70 articles, only 42 were selected. Speech-to-text conversion technology offers opportunities to improve the documentation process of medical records, reduce the cost and time of recording information, enhance the quality of documentation, improve the quality of services provided to patients, and support healthcare providers in legal matters. Healthcare providers should recognize the impact of this technology on service delivery.

  5. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
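
    One common source of the discrepancies such audits uncover is the default sums-of-squares type in unbalanced designs. A small illustration with Python's statsmodels (my example, not the authors' test suite): Type I (sequential) and Type III (partial) tables disagree once cell counts are unequal, and packages differ in which they report by default (SPSS's GLM procedure, for instance, defaults to Type III).

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        # Unbalanced two-way layout: unequal cell counts make Type I and
        # Type III sums of squares diverge.
        rng = np.random.default_rng(2)
        df = pd.DataFrame({
            "a": ["lo"] * 12 + ["hi"] * 4,
            "b": ["x"] * 8 + ["y"] * 4 + ["x"] * 1 + ["y"] * 3,
        })
        df["y"] = rng.normal(size=len(df)) + (df["a"] == "hi") * 1.0

        # Sum-to-zero contrasts so the Type III table is well defined.
        fit = ols("y ~ C(a, Sum) * C(b, Sum)", data=df).fit()
        print(sm.stats.anova_lm(fit, typ=1))            # sequential SS
        print(sm.stats.anova_lm(fit, typ=3))            # partial SS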

  6. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  7. Analysis of key technologies for virtual instruments metrology

    NASA Astrophysics Data System (ADS)

    Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang

    2008-12-01

    Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to their software-centered architecture, metrological evaluation of VIs includes two aspects: measurement functions and software characteristics. The complexity of software imposes difficulties on metrological testing of VIs. Key approaches and technologies for the metrological evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is the evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing and statistics, with the support of the powerful computing capability of the PC. Another concern is the evaluation of software features such as the correctness, reliability, stability, security and real-time behavior of VIs. Technologies from the software engineering, software testing and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing and modeling approaches can be used to evaluate the reliability of modules, components, applications and the whole VI software. The security of a VI can be assessed by methods like vulnerability scanning and penetration analysis. To enable metrology institutions to perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing and system benchmarking, a framework for the automatic tool is proposed in this paper. Investigation of the implementation of existing automatic tools that perform measurement uncertainty calculation, software testing and security assessment demonstrates the feasibility of the proposed automatic framework.
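
    The uncertainty-evaluation approach named first, simulation backed by the PC's computing power, is in essence the Monte Carlo method of GUM Supplement 1. A generic sketch (not the authors' tool):

        import numpy as np

        def monte_carlo_uncertainty(measurand, inputs, n=100_000, seed=0):
            """Propagate input distributions through a measurement function;
            inputs is a list of (mean, standard_uncertainty) pairs, assumed
            normal. Returns the mean and standard uncertainty of the result."""
            rng = np.random.default_rng(seed)
            samples = [rng.normal(m, u, n) for m, u in inputs]
            y = measurand(*samples)
            return y.mean(), y.std(ddof=1)

        # Usage: power P = V^2 / R with V = 10.00(5) V and R = 100.0(2) ohm.
        mean, u = monte_carlo_uncertainty(lambda V, R: V ** 2 / R,
                                          [(10.0, 0.05), (100.0, 0.2)])
        print(mean, u)                                  # ~1.000 W, ~0.010 W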

  8. Statistical software applications used in health services research: analysis of published studies in the U.S

    PubMed Central

    2011-01-01

    Background This study aims to identify the statistical software applications most commonly employed for data analysis in health services research (HSR) studies in the U.S. The study also examines the extent to which information describing the specific analytical software utilized is provided in published articles reporting on HSR studies. Methods Data were extracted from a sample of 1,139 articles (including 877 original research articles) published between 2007 and 2009 in three U.S. HSR journals that were considered to be representative of the field based upon a set of selection criteria. Descriptive analyses were conducted to categorize patterns in statistical software usage in those articles. The data were stratified by calendar year to detect trends in software use over time. Results Only 61.0% of original research articles in prominent U.S. HSR journals identified the particular statistical software application used for data analysis. Stata and SAS were overwhelmingly the most commonly employed applications (in 46.0% and 42.6% of articles, respectively). However, SAS use grew considerably during the study period compared to other applications. Stratification of the data revealed that the type of statistical software used varied considerably by whether authors were from the U.S. or from other countries. Conclusions The findings highlight a need for HSR investigators to identify more consistently the specific analytical software used in their studies. That information can be important, because different software packages might produce varying results owing to differences in their underlying estimation methods. PMID:21977990

  9. A Quantitative Analysis of Open Source Software's Acceptability as Production-Quality Code

    ERIC Educational Resources Information Center

    Fischer, Michael

    2011-01-01

    The difficulty of writing defect-free software has long been acknowledged by both academia and industry. A constant battle occurs as developers seek to craft software that works within aggressive business schedules and deadlines. Many tools and techniques are used in an attempt to manage these software projects. Software metrics are a tool that has…

  10. A mHealth Application for Chronic Wound Care: Findings of a User Trial

    PubMed Central

    Friesen, Marcia R.; Hamel, Carole; McLeod, Robert D.

    2013-01-01

    This paper reports on the findings of a user trial of a mHealth application for pressure ulcer (bedsore) documentation. Pressure ulcers are a leading iatrogenic cause of death in developed countries and significantly impact quality of life for those affected. Pressure ulcers will be an increasing public health concern as the population ages. Electronic information systems are being explored to improve consistency and accuracy of documentation, improve patient and caregiver experience and ultimately improve patient outcomes. A software application was developed for Android smartphones and tablets and was trialed in a personal care home in Western Canada. The software application provides an electronic medical record for chronic wounds, replacing nurses’ paper-based charting, and is positioned for integration with the facility’s larger eHealth framework. The mHealth application offers three intended benefits over paper-based charting of chronic wounds: (1) the capacity for remote consultation (telehealth between facilities, practitioners, and/or remote communities); (2) data organization and analysis, including built-in alerts and automatically generated text-based and graph-based wound histories with wound images; and (3) tutorial support for non-specialized caregivers. The user trial yielded insights regarding the software application’s design and functionality in the clinical setting, and highlighted the key role of wound photographs in enhancing patient and caregiver experiences, enhancing communication between multiple healthcare professionals, and leveraging the software’s telehealth capacities. PMID:24256739

  11. Pulser: user-friendly, graphical user-interface based software for controlling stimuli during data acquisition with Spike2 for Windows.

    PubMed

    Lidierth, Malcolm

    2005-02-15

    This paper describes software that runs in the Spike2 for Windows environment and provides a versatile tool for generating stimuli during data acquisition from the 1401 family of interfaces (CED, UK). A graphical user interface (GUI) is used to provide dynamic control of stimulus timing. Both single stimuli and trains of stimuli can be generated. The pulse generation routines make use of programmable variables within the interface and allow these to be rapidly changed during an experiment. The routines therefore provide the ease of use associated with external, stand-alone pulse generators. Complex stimulus protocols can be loaded from an external text file, and facilities are included to create these files through the GUI. The software consists of a Spike2 script that runs in the host PC and accompanying routines, written in the 1401 sequencer control code, that run in the 1401 interface. Handshaking between the PC and the interface card is built into the routines and provides for full integration of sampling, analysis and stimulus generation during an experiment. Control of the 1401 digital-to-analogue converters is also provided; this allows control of stimulus amplitude as well as timing, and also provides a sample-hold feature that may be used to remove DC offsets and drift from recorded data.

  12. A Java-based tool for the design of classification microarrays.

    PubMed

    Meng, Da; Broschat, Shira L; Call, Douglas R

    2008-08-04

    Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays (and mixed-plasmid microarrays in particular), it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including jpeg and tiff). Weights generated using stepwise discriminant analysis can be stored for analysis of subsequent experimental data. Additionally, PLASMID can be used to construct virtual microarrays with genomes from public databases, which can then be used to identify an optimal set of probes.
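
    PLASMID's probe selection via stepwise discriminant analysis can be approximated in miniature with greedy forward selection, here scored by cross-validated linear discriminant accuracy as a stand-in for the Wilks' lambda criterion of classical stepwise procedures. This is an illustration of the approach, not PLASMID's Java implementation.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def select_probes(X, y, k=3, cv=5):
            """Greedily add the probe column that most improves
            cross-validated LDA accuracy, until k probes are chosen."""
            chosen, remaining = [], list(range(X.shape[1]))
            while len(chosen) < k:
                score, best = max(
                    (cross_val_score(LinearDiscriminantAnalysis(),
                                     X[:, chosen + [j]], y, cv=cv).mean(), j)
                    for j in remaining)
                chosen.append(best)
                remaining.remove(best)
            return chosen

        # Toy usage: 60 samples, 20 probes; probes 4 and 11 separate the classes.
        rng = np.random.default_rng(3)
        y = np.repeat([0, 1, 2], 20)
        X = rng.normal(size=(60, 20))
        X[:, 4] += y * 2.0
        X[:, 11] += (y == 2) * 3.0
        print(select_probes(X, y))                      # e.g. [4, 11, ...]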

  13. MaRiMba: a software application for spectral library-based MRM transition list assembly.

    PubMed

    Sherwood, Carly A; Eastham, Ashley; Lee, Lik Wee; Peterson, Amelia; Eng, Jimmy K; Shteynberg, David; Mendoza, Luis; Deutsch, Eric W; Risler, Jenni; Tasman, Natalie; Aebersold, Ruedi; Lam, Henry; Martin, Daniel B

    2009-10-01

    Multiple reaction monitoring mass spectrometry (MRM-MS) is a targeted analysis method that has been increasingly viewed as an avenue to explore proteomes with unprecedented sensitivity and throughput. We have developed a software tool, called MaRiMba, to automate the creation of explicitly defined MRM transition lists required to program triple quadrupole mass spectrometers in such analyses. MaRiMba creates MRM transition lists from downloaded or custom-built spectral libraries, restricts output to specified proteins or peptides, and filters based on precursor peptide and product ion properties. MaRiMba can also create MRM lists containing corresponding transitions for isotopically heavy peptides, for which the precursor and product ions are adjusted according to user specifications. This open-source application is operated through a graphical user interface incorporated into the Trans-Proteomic Pipeline, and it outputs the final MRM list to a text file for upload to MS instruments. To illustrate the use of MaRiMba, we used the tool to design and execute an MRM-MS experiment in which we targeted the proteins of a well-defined and previously published standard mixture.
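
    The arithmetic at the core of any MRM transition list is the computation of precursor and product m/z values from residue masses, with heavy-labeled peptides handled by shifting the relevant masses. A sketch of that calculation (the residue table is abbreviated, and this is not MaRiMba's filtering logic):

        # Monoisotopic residue masses (Da) for a few amino acids.
        RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
                   "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}
        WATER, PROTON = 18.01056, 1.00728

        def mrm_transitions(peptide, precursor_z=2):
            """Precursor m/z and singly charged y-ion m/z values for a peptide."""
            masses = [RESIDUE[aa] for aa in peptide]
            precursor = (sum(masses) + WATER) / precursor_z + PROTON
            # y-ions y1..y(n-1): C-terminal fragments, each singly protonated.
            y_ions = [sum(masses[i:]) + WATER + PROTON
                      for i in range(len(masses) - 1, 0, -1)]
            return precursor, y_ions

        prec, ys = mrm_transitions("VGLAPK")
        print(round(prec, 3), [round(m, 3) for m in ys])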

  14. Analyzing Spacecraft Telecommunication Systems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric

    2004-01-01

    Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
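
    The link analyses such a tool performs rest on the standard link-budget relation: received C/N0 in dB-Hz is the sum of EIRP and receiver G/T minus free-space and miscellaneous losses, plus 228.6 dB for Boltzmann's constant. A worked sketch of that textbook calculation in Python (illustrative numbers, not MMTAT's models or any mission's values):

        import math

        def free_space_loss_db(distance_m, freq_hz):
            """Free-space path loss: 20*log10(4*pi*d/lambda)."""
            lam = 299792458.0 / freq_hz
            return 20 * math.log10(4 * math.pi * distance_m / lam)

        def cn0_dbhz(eirp_dbw, gt_dbk, distance_m, freq_hz, misc_loss_db=2.0):
            """Downlink C/N0 = EIRP + G/T - Lfs - Lmisc + 228.6."""
            return (eirp_dbw + gt_dbk + 228.6 - misc_loss_db
                    - free_space_loss_db(distance_m, freq_hz))

        # X-band (8.4 GHz) over 2.0e11 m to a 34 m class ground station.
        print(free_space_loss_db(2.0e11, 8.4e9))        # ~277 dB
        print(cn0_dbhz(60.0, 53.0, 2.0e11, 8.4e9))      # ~62.6 dB-Hz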

  15. BamTools: a C++ API and toolkit for analyzing and managing BAM files

    PubMed Central

    Barnett, Derek W.; Garrison, Erik K.; Quinlan, Aaron R.; Strömberg, Michael P.; Marth, Gabor T.

    2011-01-01

    Motivation: Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. Results: We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. Availability: BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools. Contact: barnetde@bc.edu PMID:21493652
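
    For readers working in Python rather than C++, the kind of filtering task the BamTools toolkit performs can be sketched with the separate pysam library; this is an analogous API, not BamTools itself.

        import pysam

        def filter_bam(in_path, out_path, min_mapq=30):
            """Copy mapped alignments with mapping quality >= min_mapq to a
            new BAM file, analogous to a BamTools filter operation."""
            with pysam.AlignmentFile(in_path, "rb") as src, \
                 pysam.AlignmentFile(out_path, "wb", template=src) as dst:
                kept = 0
                for read in src:
                    if not read.is_unmapped and read.mapping_quality >= min_mapq:
                        dst.write(read)
                        kept += 1
            return kept

        # Usage (placeholder paths):
        # n = filter_bam("sample.bam", "sample.q30.bam", min_mapq=30)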

  16. Seamless presentation capture, indexing, and management

    NASA Astrophysics Data System (ADS)

    Hilbert, David M.; Cooper, Matthew; Denoue, Laurent; Adcock, John; Billsus, Daniel

    2005-10-01

    Technology abounds for capturing presentations. However, no simple solution exists that is completely automatic. ProjectorBox is a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as a presenter's laptop, to display devices, such as a projector. It seamlessly captures high-resolution slide images, text and audio. It requires no operator, specialized software, or changes to current presentation practice. Automatic media analysis is used to detect presentation content and segment presentations. The analysis substantially enhances the web-based user interface for browsing, searching, and exporting captured presentations. ProjectorBox has been in use for over a year in our corporate conference room, and has been deployed in two universities. Our goal is to develop automatic capture services that address both corporate and educational needs.
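
    The "automatic media analysis" used to segment presentations can be illustrated by the simplest version of the idea: declare a new slide whenever consecutive captured frames differ by more than a threshold. This sketch shows the principle, not the ProjectorBox implementation.

        import numpy as np

        def segment_slides(frames, threshold=0.05):
            """Indices of frames that start a new slide: a boundary occurs
            when the mean absolute pixel change from the previous frame
            exceeds `threshold` (pixel values in [0, 1])."""
            boundaries = [0]
            for i in range(1, len(frames)):
                if np.mean(np.abs(frames[i] - frames[i - 1])) > threshold:
                    boundaries.append(i)
            return boundaries

        # Toy usage: three "slides", each captured for ten frames.
        rng = np.random.default_rng(4)
        slides = [rng.random((48, 64)) for _ in range(3)]
        frames = [s for s in slides for _ in range(10)]
        print(segment_slides(frames))                   # -> [0, 10, 20]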

  17. Software Assurance: Five Essential Considerations for Acquisition Officials

    DTIC Science & Technology

    2007-05-01

    • address security concerns in the software development life cycle (SDLC)? • Are there formal software quality... What threat modeling process, if any, is used when designing the software? What analysis, design, and construction tools are used by your software design... commercial off-the-shelf (COTS), government off-the-shelf (GOTS), open-source, embedded, and legacy software. Attackers exploit unintentional vulnerabilities or

  18. The Missing Link: The Use of Link Words and Phrases as a Link to Manuscript Quality

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.

    2016-01-01

    In this article, I provide a typology of transition words/phrases. This typology comprises 12 dimensions of link words/phrases that capture 277 link words/phrases. Using QDA Miner, WordStat, and SPSS--a computer-assisted mixed methods data analysis software, content analysis software, and statistical software, respectively--I analyzed 74…

  19. An overview of STRUCTURE: applications, parameter settings, and supporting software

    PubMed Central

    Porras-Hurtado, Liliana; Ruiz, Yarimar; Santos, Carla; Phillips, Christopher; Carracedo, Ángel; Lareu, Maria V.

    2013-01-01

    Objectives: We present an up-to-date review of STRUCTURE software: one of the most widely used population analysis tools that allows researchers to assess patterns of genetic structure in a set of samples. STRUCTURE can identify subsets of the whole sample by detecting allele frequency differences within the data and can assign individuals to those sub-populations based on analysis of likelihoods. The review covers STRUCTURE's most commonly used ancestry and frequency models and gives an overview of the main applications of the software in human genetics, including case-control association studies (CCAS), population genetics, and forensic analysis. The review is accompanied by supplementary material providing a step-by-step guide to running STRUCTURE. Methods: With reference to a worked example, we explore the effects of changing the principal analysis parameters on STRUCTURE results when analyzing a uniform set of human genetic data. Use of the supporting software CLUMPP and distruct is detailed, and we provide an overview and worked example of STRAT software, applicable to CCAS. Conclusion: The guide offers a simplified view of how STRUCTURE, CLUMPP, distruct, and STRAT can be applied to provide researchers with an informed choice of parameter settings and supporting software when analyzing their own genetic data. PMID:23755071

  20. Teaching Confirmatory Factor Analysis to Non-Statisticians: A Case Study for Estimating Composite Reliability of Psychometric Instruments

    PubMed Central

    Gajewski, Byron J.; Jiang, Yu; Yeh, Hung-Wen; Engelman, Kimberly; Teel, Cynthia; Choi, Won S.; Greiner, K. Allen; Daley, Christine Makosky

    2013-01-01

    The texts and software that we currently use for teaching multivariate analysis to non-statisticians are lacking in their delivery of confirmatory factor analysis (CFA). The purpose of this paper is to provide educators with a complement to these resources that includes CFA and its computation. We focus on how to use CFA to estimate the “composite reliability” of a psychometric instrument. This paper provides guidance for introducing, via a case study, the non-statistician to CFA. As a complement to our instruction on the more traditional SPSS, we successfully piloted the software R for estimating CFA with nine non-statisticians. This approach can be used with healthcare graduate students taking a multivariate course, as well as modified for community stakeholders of our Center for American Indian Community Health (e.g. community advisory boards, summer interns, & research team members). The placement of CFA at the end of the class is strategic and gives us an opportunity to do some innovative teaching: (1) build ideas for understanding the case study using previous course work (such as ANOVA); (2) incorporate multi-dimensional scaling (which students have already learned) into the selection of a factor structure (a new concept); (3) use interactive data from the students (active learning); (4) review matrix algebra and its importance to psychometric evaluation; (5) show students how to do the calculation on their own; and (6) give students access to an actual recent research project. PMID:24772373
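
    The "composite reliability" estimated from CFA results is typically the McDonald/Raykov coefficient; assuming that estimator is the one intended here, with standardized loadings lambda_i and error variances theta_i for the k items loading on a factor, it is

        \rho_c = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}
                      {\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k} \theta_i}

    For example, three items each loading 0.7 (so each theta_i = 1 - 0.49 = 0.51) give rho_c = 2.1^2 / (2.1^2 + 1.53), approximately 0.74.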
