Sample records for statistical software package

  1. A comparison of InVivoStat with other statistical software packages for analysis of data generated from animal experiments.

    PubMed

    Clark, Robin A; Shoaib, Mohammed; Hewitt, Katherine N; Stanford, S Clare; Bate, Simon T

    2012-08-01

    InVivoStat is a free-to-use statistical software package for analysis of data generated from animal experiments. The package is designed specifically for researchers in the behavioural sciences, where exploiting the experimental design is crucial for reliable statistical analyses. This paper compares the analysis of three experiments conducted using InVivoStat with other widely used statistical packages: SPSS (V19), PRISM (V5), UniStat (V5.6) and Statistica (V9). We show that InVivoStat provides results that are similar to those from the other packages and, in some cases, are more advanced. This investigation provides evidence of further validation of InVivoStat and should strengthen users' confidence in this new software package.

  2. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    PubMed

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
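
    For readers who prefer R to SPSS, the same model class can be fit with the lme4 package. Below is a minimal sketch on simulated data with hypothetical variable names (id, time, score); it illustrates the model class, not the SPSS procedure itself.

    ```r
    # Random-intercept-and-slope LMM for longitudinal data; all names are illustrative
    library(lme4)

    set.seed(1)
    df <- data.frame(id   = factor(rep(1:20, each = 4)),   # 20 subjects, 4 visits each
                     time = rep(0:3, times = 20))
    df$score <- 2 + 0.5 * df$time + rnorm(20)[df$id] + rnorm(80, sd = 0.5)

    # Fixed effect of time; random intercept and slope per subject, fit by REML
    fit <- lmer(score ~ time + (1 + time | id), data = df, REML = TRUE)
    summary(fit)
    ```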

  3. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  4. INTERFACING SAS TO ORACLE IN THE UNIX ENVIRONMENT

    EPA Science Inventory

    SAS is an EPA standard data and statistical analysis software package while ORACLE is EPA's standard database management system software package. ORACLE has the advantage over SAS in data retrieval and storage capabilities but has limited data and statistical analysis capability....

  5. On Fitting Generalized Linear Mixed-effects Models for Binary Responses using Different Statistical Packages

    PubMed Central

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.

    2011-01-01

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252
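
    As a concrete illustration of one of the procedure types compared, a maximum-likelihood GLMM for correlated binary responses can be fit in R with lme4::glmer. This is a hedged sketch on simulated data; the paper itself covers several packages and estimation methods.

    ```r
    library(lme4)

    set.seed(2)
    df <- data.frame(id = factor(rep(1:50, each = 4)), time = rep(0:3, times = 50))
    b  <- rnorm(50, sd = 1)   # subject-level effects induce within-subject correlation
    df$y <- rbinom(nrow(df), 1, plogis(-0.5 + 0.3 * df$time + b[df$id]))

    # Random-intercept logistic GLMM; nAGQ > 1 selects adaptive Gauss-Hermite
    # quadrature, one of the integration strategies whose reliability the paper examines
    fit <- glmer(y ~ time + (1 | id), family = binomial, data = df, nAGQ = 10)
    summary(fit)
    ```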

  6. PIV Data Validation Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.

  7. On fitting generalized linear mixed-effects models for binary responses using different statistical packages.

    PubMed

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M

    2011-09-10

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Mathematical and Statistical Software Index. Final Report.

    ERIC Educational Resources Information Center

    Black, Doris E., Comp.

    Brief descriptions are provided of general-purpose mathematical and statistical software, including 27 "stand-alone" programs, three subroutine systems, and two nationally recognized statistical packages, which are available in the Air Force Human Resources Laboratory (AFHRL) software library. This index was created to enable researchers…

  9. Interactive Visualization of Assessment Data: The Software Package Mondrian

    ERIC Educational Resources Information Center

    Unlu, Ali; Sargin, Anatol

    2009-01-01

    Mondrian is state-of-the-art statistical data visualization software featuring modern interactive visualization techniques for a wide range of data types. This article reviews the capabilities, functionality, and interactive properties of this software package. Key features of Mondrian are illustrated with data from the Programme for International…

  10. User’s guide for GcClust—An R package for clustering of regional geochemical data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David B.

    2016-04-08

    GcClust is a software package developed by the U.S. Geological Survey for statistical clustering of regional geochemical data, and similar data such as regional mineralogical data. Functions within the software package are written in the R statistical programming language. These functions, their documentation, and a copy of the user’s guide are bundled together in R’s unit of sharable code, which is called a “package.” The user’s guide includes step-by-step instructions showing how the functions are used to cluster data and to evaluate the clustering results. These functions are demonstrated in this report using test data, which are included in the package.
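
    Once installed, the package can be explored with standard R tooling. A generic sketch follows; the installation route is an assumption, since USGS packages are often distributed outside CRAN.

    ```r
    # Assumes GcClust has already been installed, e.g. from a USGS-provided archive:
    # install.packages("GcClust_1.0.tar.gz", repos = NULL, type = "source")  # hypothetical file
    library(GcClust)
    ls("package:GcClust")           # list the exported clustering functions
    vignette(package = "GcClust")   # locate the bundled user's guide, if present
    ```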

  11. Network Meta-Analysis Using R: A Review of Currently Available Automated Packages

    PubMed Central

    Neupane, Binod; Richer, Danielle; Bonner, Ashley Joel; Kibret, Taddele; Beyene, Joseph

    2014-01-01

    Network meta-analysis (NMA) – a statistical technique that allows comparison of multiple treatments in the same meta-analysis simultaneously – has become increasingly popular in the medical literature in recent years. The statistical methodology underpinning this technique and software tools for implementing the methods are evolving. Both commercial and freely available statistical software packages have been developed to facilitate the statistical computations using NMA with varying degrees of functionality and ease of use. This paper aims to introduce the reader to three R packages, namely, gemtc, pcnetmeta, and netmeta, which are freely available software tools implemented in R. Each automates the process of performing NMA so that users can perform the analysis with minimal computational effort. We present, compare and contrast the availability and functionality of different important features of NMA in these three packages so that clinical investigators and researchers can determine which R packages to implement depending on their analysis needs. Four summary tables detailing (i) data input and network plotting, (ii) modeling options, (iii) assumption checking and diagnostic testing, and (iv) inference and reporting tools, are provided, along with an analysis of a previously published dataset to illustrate the outputs available from each package. We demonstrate that each of the three packages provides a useful set of tools, and combined provide users with nearly all functionality that might be desired when conducting a NMA. PMID:25541687
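
    Of the three packages, netmeta is the frequentist option. A minimal sketch using its standard contrast-level data layout follows; the study labels and effect sizes are invented for illustration.

    ```r
    library(netmeta)

    d <- data.frame(
      studlab = c("S1", "S2", "S3", "S4"),
      treat1  = c("A", "A", "B", "A"),
      treat2  = c("B", "C", "C", "B"),
      TE      = c(0.20, 0.45, 0.25, 0.10),  # pairwise effects, e.g. log odds ratios
      seTE    = c(0.10, 0.20, 0.15, 0.12)   # their standard errors
    )

    nm <- netmeta(TE, seTE, treat1, treat2, studlab, data = d, sm = "OR")
    summary(nm)    # network estimates for all treatment comparisons
    netgraph(nm)   # the network plot covered by the paper's summary tables
    ```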

  12. Network meta-analysis using R: a review of currently available automated packages.

    PubMed

    Neupane, Binod; Richer, Danielle; Bonner, Ashley Joel; Kibret, Taddele; Beyene, Joseph

    2014-01-01

    Network meta-analysis (NMA) – a statistical technique that allows comparison of multiple treatments in the same meta-analysis simultaneously – has become increasingly popular in the medical literature in recent years. The statistical methodology underpinning this technique and software tools for implementing the methods are evolving. Both commercial and freely available statistical software packages have been developed to facilitate the statistical computations using NMA with varying degrees of functionality and ease of use. This paper aims to introduce the reader to three R packages, namely, gemtc, pcnetmeta, and netmeta, which are freely available software tools implemented in R. Each automates the process of performing NMA so that users can perform the analysis with minimal computational effort. We present, compare and contrast the availability and functionality of different important features of NMA in these three packages so that clinical investigators and researchers can determine which R packages to implement depending on their analysis needs. Four summary tables detailing (i) data input and network plotting, (ii) modeling options, (iii) assumption checking and diagnostic testing, and (iv) inference and reporting tools, are provided, along with an analysis of a previously published dataset to illustrate the outputs available from each package. We demonstrate that each of the three packages provides a useful set of tools, and combined provide users with nearly all functionality that might be desired when conducting a NMA.

  13. Quick Overview Scout 2008 Version 1.0

    EPA Science Inventory

    The Scout 2008 version 1.0 statistical software package has been updated from past DOS and Windows versions to provide classical and robust univariate and multivariate graphical and statistical methods that are not typically available in commercial or freeware statistical softwar...

  14. Scout 2008 Version 1.0 User Guide

    EPA Science Inventory

    The Scout 2008 version 1.0 software package provides a wide variety of classical and robust statistical methods that are not typically available in other commercial software packages. A major part of Scout deals with classical, robust, and resistant univariate and multivariate ou...

  15. Report: Scientific Software.

    ERIC Educational Resources Information Center

    Borman, Stuart A.

    1985-01-01

    Discusses various aspects of scientific software, including evaluation and selection of commercial software products; program exchanges, catalogs, and other information sources; major data analysis packages; statistics and chemometrics software; and artificial intelligence. (JN)

  16. Evaluation of bond strength of resin cements using different general-purpose statistical software packages for two-parameter Weibull statistics.

    PubMed

    Roos, Malgorzata; Stawarczyk, Bogna

    2012-07-01

    This study evaluated and compared Weibull parameters of resin bond strength values using six different general-purpose statistical software packages for two-parameter Weibull distribution. Two-hundred human teeth were randomly divided into 4 groups (n=50), prepared and bonded on dentin according to the manufacturers' instructions using the following resin cements: (i) Variolink (VAN, conventional resin cement), (ii) Panavia21 (PAN, conventional resin cement), (iii) RelyX Unicem (RXU, self-adhesive resin cement) and (iv) G-Cem (GCM, self-adhesive resin cement). Subsequently, all specimens were stored in water for 24h at 37°C. Shear bond strength was measured and the data were analyzed using Anderson-Darling goodness-of-fit (MINITAB 16) and two-parameter Weibull statistics with the following statistical software packages: Excel 2011, SPSS 19, MINITAB 16, R 2.12.1, SAS 9.1.3 and STATA 11.2 (p≤0.05). Additionally, the three-parameter Weibull was fitted using MINITAB 16. Two-parameter Weibull estimates calculated with MINITAB and STATA can be compared using an omnibus test and 95% CIs. In SAS, only 95% CIs were directly obtained from the output. R provided no estimates of 95% CIs. In both SAS and R the global comparison of the characteristic bond strength among groups is provided by means of the Weibull regression. EXCEL and SPSS provided no default information about 95% CIs and no significance test for the comparison of Weibull parameters among the groups. In summary, conventional resin cement VAN showed the highest Weibull modulus and characteristic bond strength. There are discrepancies in the Weibull statistics depending on the software package and the estimation method. The information content in the default output provided by the software packages differs to a very high extent. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
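
    In R, a two-parameter Weibull fit of the kind compared here can be obtained with MASS::fitdistr; the sketch below uses simulated bond-strength values. Consistent with the paper's observation that R's default output lacks 95% CIs, the intervals are derived manually from the reported standard errors.

    ```r
    library(MASS)

    set.seed(42)
    bond <- rweibull(50, shape = 5, scale = 20)   # stand-in for one cement group (MPa)

    fit <- fitdistr(bond, densfun = "weibull")
    fit            # shape = Weibull modulus, scale = characteristic bond strength
    est <- fit$estimate
    se  <- fit$sd
    rbind(lower = est - 1.96 * se, upper = est + 1.96 * se)  # approximate Wald 95% CIs
    ```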

  17. Using R in Introductory Statistics Courses with the pmg Graphical User Interface

    ERIC Educational Resources Information Center

    Verzani, John

    2008-01-01

    The pmg add-on package for the open source statistics software R is described. This package provides a simple to use graphical user interface (GUI) that allows introductory statistics students, without advanced computing skills, to quickly create the graphical and numeric summaries expected of them. (Contains 9 figures.)

  18. Evaluation of Solid Rocket Motor Component Data Using a Commercially Available Statistical Software Package

    NASA Technical Reports Server (NTRS)

    Stefanski, Philip L.

    2015-01-01

    Commercially available software packages today allow users to quickly perform the routine evaluations of (1) descriptive statistics to numerically and graphically summarize both sample and population data, (2) inferential statistics that draw conclusions about a given population from samples taken of it, (3) probability determinations that can be used to generate estimates of reliability allowables, and finally (4) the setup of designed experiments and analysis of their data to identify significant material and process characteristics for application in both product manufacturing and performance enhancement. This paper presents examples of analysis and experimental design work that has been conducted using Statgraphics® statistical software to obtain useful information with regard to solid rocket motor propellants and internal insulation material. Data were obtained from a number of programs (Shuttle, Constellation, and Space Launch System) and sources that include solid propellant burn rate strands, tensile specimens, sub-scale test motors, full-scale operational motors, rubber insulation specimens, and sub-scale rubber insulation analog samples. Besides facilitating the experimental design process to yield meaningful results, statistical software has demonstrated its ability to quickly perform complex data analyses and yield significant findings that might otherwise have gone unnoticed. One caveat to these successes is that useful results not only derive from the inherent power of the software package, but also from the skill and understanding of the data analyst.

  19. Comparison of four software packages for CT lung volumetry in healthy individuals.

    PubMed

    Nemec, Stefan F; Molinari, Francesco; Dufresne, Valerie; Gosset, Natacha; Silva, Mario; Bankier, Alexander A

    2015-06-01

    To compare CT lung volumetry (CTLV) measurements provided by different software packages, and to provide normative data for lung densitometric measurements in healthy individuals. This retrospective study included 51 chest CTs of 17 volunteers (eight men and nine women; mean age, 30 ± 6 years), who underwent spirometrically monitored CT at total lung capacity (TLC), functional residual capacity (FRC), and mean inspiratory capacity (MIC). Volumetric differences assessed by four commercial software packages were compared with analysis of variance (ANOVA) for repeated measurements and benchmarked against the threshold for acceptable variability between spirometric measurements. Mean lung density (MLD) and parenchymal heterogeneity (MLD-SD) were also compared with ANOVA. Volumetric differences ranged from 12 to 213 ml (0.20 % to 6.45 %). Although 16/18 comparisons (among four software packages at TLC, MIC, and FRC) were statistically significant (P < 0.001 to P = 0.004), only 3/18 comparisons, one at MIC and two at FRC, exceeded the spirometry variability threshold. MLD and MLD-SD significantly increased with decreasing volumes, and were significantly larger in lower compared to upper lobes (P < 0.001). Lung volumetric differences provided by different software packages are small. These differences should not be interpreted based on statistical significance alone, but together with absolute volumetric differences. • Volumetric differences, assessed by different CTLV software, are small but statistically significant. • Volumetric differences are smaller at TLC than at MIC and FRC. • Volumetric differences rarely exceed spirometric repeatability thresholds at MIC and FRC. • Differences between CTLV measurements should be interpreted based on comparison of absolute differences. • MLD increases with decreasing volumes, and is larger in lower compared to upper lobes.
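
    The core comparison described, the same scans measured by each software package, maps onto a repeated-measures ANOVA. A hedged sketch in R with simulated volumes:

    ```r
    set.seed(7)
    df <- data.frame(
      scan     = factor(rep(1:17, each = 4)),               # 17 subjects' scans
      software = factor(rep(c("A", "B", "C", "D"), 17)),    # four volumetry packages
      volume   = rnorm(68, mean = 6000, sd = 50)            # ml, illustrative values
    )

    # Within-scan comparison of software packages (ANOVA for repeated measurements)
    fit <- aov(volume ~ software + Error(scan/software), data = df)
    summary(fit)
    ```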

  20. Integrating Statistical Visualization Research into the Political Science Classroom

    ERIC Educational Resources Information Center

    Draper, Geoffrey M.; Liu, Baodong; Riesenfeld, Richard F.

    2011-01-01

    The use of computer software to facilitate learning in political science courses is well established. However, the statistical software packages used in many political science courses can be difficult to use and counter-intuitive. We describe the results of a preliminary user study suggesting that visually-oriented analysis software can help…

  1. Orchestrating high-throughput genomic analysis with Bioconductor

    PubMed Central

    Huber, Wolfgang; Carey, Vincent J.; Gentleman, Robert; Anders, Simon; Carlson, Marc; Carvalho, Benilton S.; Bravo, Hector Corrada; Davis, Sean; Gatto, Laurent; Girke, Thomas; Gottardo, Raphael; Hahne, Florian; Hansen, Kasper D.; Irizarry, Rafael A.; Lawrence, Michael; Love, Michael I.; MacDonald, James; Obenchain, Valerie; Oleś, Andrzej K.; Pagès, Hervé; Reyes, Alejandro; Shannon, Paul; Smyth, Gordon K.; Tenenbaum, Dan; Waldron, Levi; Morgan, Martin

    2015-01-01

    Bioconductor is an open-source, open-development software project for the analysis and comprehension of high-throughput data in genomics and molecular biology. The project aims to enable interdisciplinary research, collaboration and rapid development of scientific software. Based on the statistical programming language R, Bioconductor comprises 934 interoperable packages contributed by a large, diverse community of scientists. Packages cover a range of bioinformatic and statistical applications. They undergo formal initial review and continuous automated testing. We present an overview for prospective users and contributors. PMID:25633503
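
    For prospective users, Bioconductor packages are installed through a dedicated manager rather than CRAN alone. The current route is sketched below; the package chosen (limma) is only an example, not named in the abstract, and BiocManager itself postdates the 2015 article.

    ```r
    # Current Bioconductor installation route
    if (!requireNamespace("BiocManager", quietly = TRUE))
      install.packages("BiocManager")
    BiocManager::install("limma")   # any of the interoperable packages
    library(limma)
    ```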

  2. Prototyping with Data Dictionaries for Requirements Analysis.

    DTIC Science & Technology

    1985-03-01

    statistical packages and software for screen layout. These items work at a higher level than another category of prototyping tool, program generators... Program generators are software packages which, when given specifications, produce source listings, usually in a high order language such as COBOL...with users and this will not happen if he must stop to develop a detailed program. [Ref. 241] Hardware as well as software should be considered in

  3. Attitudes and Achievement in Introductory Psychological Statistics Classes: Traditional versus Computer-Supported Instruction.

    ERIC Educational Resources Information Center

    Gratz, Zandra S.; And Others

    A study was conducted at a large, state-supported college in the Northeast to establish a mechanism by which a popular software package, Statistical Package for the Social Sciences (SPSS), could be used in psychology program statistics courses in such a way that no prior computer expertise would be needed on the part of the faculty or the…

  4. User’s guide for MapMark4—An R package for the probability calculations in three-part mineral resource assessments

    USGS Publications Warehouse

    Ellefsen, Karl J.

    2017-06-27

    MapMark4 is a software package that implements the probability calculations in three-part mineral resource assessments. Functions within the software package are written in the R statistical programming language. These functions, their documentation, and a copy of this user’s guide are bundled together in R’s unit of shareable code, which is called a “package.” This user’s guide includes step-by-step instructions showing how the functions are used to carry out the probability calculations. The calculations are demonstrated using test data, which are included in the package.

  5. GenomeGraphs: integrated genomic data visualization with R.

    PubMed

    Durinck, Steffen; Bullard, James; Spellman, Paul T; Dudoit, Sandrine

    2009-01-06

    Biological studies involve a growing number of distinct high-throughput experiments to characterize samples of interest. There is a lack of methods to visualize these different genomic datasets in a versatile manner. In addition, genomic data analysis requires integrated visualization of experimental data along with constantly changing genomic annotation and statistical analyses. We developed GenomeGraphs, an add-on software package for the statistical programming environment R, to facilitate integrated visualization of genomic datasets. GenomeGraphs uses the biomaRt package to perform on-line annotation queries to Ensembl and translates these to gene/transcript structures in viewports of the grid graphics package. This allows genomic annotation to be plotted together with experimental data. GenomeGraphs can also be used to plot custom annotation tracks in combination with different experimental data types together in one plot using the same genomic coordinate system. GenomeGraphs is a flexible and extensible software package which can be used to visualize a multitude of genomic datasets within the statistical programming environment R.

  6. Software Reviews.

    ERIC Educational Resources Information Center

    Mathematics and Computer Education, 1988

    1988-01-01

    Presents reviews of six software packages. Includes (1) "Plain Vanilla Statistics"; (2) "MathCAD 2.0"; (3) "GrFx"; (4) "Trigonometry"; (5) "Algebra II"; (6) "Algebra Drill and Practice I, II, and III." (PK)

  7. Advanced statistical methods for improved data analysis of NASA astrophysics missions

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.

    1992-01-01

    The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.

  8. ProUCL version 4.1.00 Documentation Downloads

    EPA Pesticide Factsheets

    ProUCL version 4.1.00 represents a comprehensive statistical software package equipped with statistical methods and graphical tools needed to address many environmental sampling and statistical issues as described in various guidance documents.

  9. Software for the Integration of Multiomics Experiments in Bioconductor.

    PubMed

    Ramos, Marcel; Schiffer, Lucas; Re, Angela; Azhar, Rimsha; Basunia, Azfar; Rodriguez, Carmen; Chan, Tiffany; Chapman, Phil; Davis, Sean R; Gomez-Cabrero, David; Culhane, Aedin C; Haibe-Kains, Benjamin; Hansen, Kasper D; Kodali, Hanish; Louis, Marie S; Mer, Arvind S; Riester, Markus; Morgan, Martin; Carey, Vince; Waldron, Levi

    2017-11-01

    Multiomics experiments are increasingly commonplace in biomedical research and add layers of complexity to experimental design, data integration, and analysis. R and Bioconductor provide a generic framework for statistical analysis and visualization, as well as specialized data classes for a variety of high-throughput data types, but methods are lacking for integrative analysis of multiomics experiments. The MultiAssayExperiment software package, implemented in R and leveraging Bioconductor software and design principles, provides for the coordinated representation of, storage of, and operation on multiple diverse genomics data. We provide the unrestricted multiple 'omics data for each cancer tissue in The Cancer Genome Atlas as ready-to-analyze MultiAssayExperiment objects and demonstrate in these and other datasets how the software simplifies data representation, statistical analysis, and visualization. The MultiAssayExperiment Bioconductor package reduces major obstacles to efficient, scalable, and reproducible statistical analysis of multiomics data and enhances data science applications of multiple omics datasets. Cancer Res; 77(21); e39-42. ©2017 American Association for Cancer Research.
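
    A minimal sketch of constructing a MultiAssayExperiment from two toy assays follows; the ready-to-analyze TCGA objects mentioned in the abstract are distributed pre-built, so this only illustrates the coordinated-representation idea.

    ```r
    library(MultiAssayExperiment)

    set.seed(5)
    rna  <- matrix(rnorm(20), nrow = 4,
                   dimnames = list(paste0("gene", 1:4), paste0("sample", 1:5)))
    prot <- matrix(rnorm(15), nrow = 3,
                   dimnames = list(paste0("prot", 1:3), paste0("sample", 1:5)))

    mae <- MultiAssayExperiment(experiments = ExperimentList(list(RNA = rna,
                                                                  Protein = prot)))
    mae
    experiments(mae)   # the coordinated assays
    sampleMap(mae)     # sample-to-assay bookkeeping maintained by the class
    ```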

  10. Powerlaw: a Python package for analysis of heavy-tailed distributions.

    PubMed

    Alstott, Jeff; Bullmore, Ed; Plenz, Dietmar

    2014-01-01

    Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years, effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible.
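
    The article's package is Python-based; to keep the examples in this collection in one language, here is the analogous workflow in R's poweRlaw package, offered plainly as a substitute for, not a rendering of, the authors' tool.

    ```r
    library(poweRlaw)

    set.seed(8)
    x <- rpldis(1000, xmin = 1, alpha = 2.5)   # simulated discrete power-law data

    m   <- displ$new(x)          # discrete power-law model object
    est <- estimate_xmin(m)      # joint fit of xmin and the scaling exponent
    m$setXmin(est)

    plot(m)                      # empirical CCDF on log-log axes
    lines(m, col = "red")        # fitted power law
    ```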

  11. Environmental statistics with S-Plus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Millard, S.P.; Neerchal, N.K.

    1999-12-01

    The combination of easy-to-use software with easy access to a description of the statistical methods (definitions, concepts, etc.) makes this book an excellent resource. One of the major features of this book is the inclusion of general information on environmental statistical methods and examples of how to implement these methods using the statistical software package S-Plus and the add-in modules Environmental-Stats for S-Plus, S+SpatialStats, and S-Plus for ArcView.

  12. Analysis of half diallel mating designs I: a practical analysis procedure for ANOVA approximation.

    Treesearch

    G.R. Johnson; J.N. King

    1998-01-01

    Procedures to analyze half-diallel mating designs using the SAS statistical package are presented. The procedure requires two runs of PROC VARCOMP and results in estimates of additive and non-additive genetic variation. The procedures described can be modified to work on most statistical software packages which can compute variance component estimates. The...
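
    As the abstract notes, the analysis ports to any software that estimates variance components. A deliberately simplified sketch in R with lme4 follows, using hypothetical parent factors; it is reduced relative to the full diallel analysis in the paper.

    ```r
    library(lme4)

    set.seed(1)
    d <- expand.grid(female = factor(1:6), male = factor(1:6), rep = 1:2)
    d <- subset(d, as.integer(male) < as.integer(female))   # half diallel: each cross once

    gca <- rnorm(6)   # general combining ability of each parent (simulated)
    d$y <- gca[as.integer(d$female)] + gca[as.integer(d$male)] + rnorm(nrow(d), sd = 0.5)

    # Additive (GCA) terms for each parent role plus a non-additive (SCA) interaction
    fit <- lmer(y ~ (1 | female) + (1 | male) + (1 | female:male), data = d)
    VarCorr(fit)   # variance component estimates
    ```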

  13. Quality Assurance Information for R Packages "aqfig" and "M3"

    EPA Science Inventory

    R packages "aqfig" and "M3" are optional modules for use with R statistical software (http://www.r-project.org). Package "aqfig" contains functions to aid users in the preparation of publication-quality figures for the display of air quality and other environmental data (e.g., le...

  14. Vertical bone measurements from cone beam computed tomography images using different software packages.

    PubMed

    Vasconcelos, Taruska Ventorini; Neves, Frederico Sampaio; Moraes, Lívia Almeida Bueno; Freitas, Deborah Queiroz

    2015-01-01

    This article aimed at comparing the accuracy of linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; the one-way analysis of variance performed with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The least difference between the software-derived measurements and the gold standard was obtained with the OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest, with the XoranCAT (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.

  15. Computing and software

    USGS Publications Warehouse

    White, Gary C.; Hines, J.E.

    2004-01-01

    The reality is that the statistical methods used for analysis of data depend upon the availability of software. Analysis of marked animal data is no different than the rest of the statistical field. The methods used for analysis are those that are available in reliable software packages. Thus, the critical importance of having reliable, up-to-date software available to biologists is obvious. Statisticians have continued to develop more robust models, ever expanding the suite of potential analysis methods available. But without software to implement these newer methods, they will languish in the abstract, and not be applied to the problems deserving them. In the Computers and Software Session, two new software packages are described, a comparison of implementations of methods for the estimation of nest survival is provided, and a more speculative paper about how the next generation of software might be structured is presented. Rotella et al. (2004) compare nest survival estimation with different software packages: SAS logistic regression, SAS non-linear mixed models, and Program MARK. Nests are assumed to be visited at various, possibly infrequent, intervals. All of the approaches described compute nest survival with the same likelihood, and require that the age of the nest is known to account for nests that eventually hatch. However, each approach offers advantages and disadvantages, explored by Rotella et al. (2004). Efford et al. (2004) present a new software package called DENSITY. The package computes population abundance and density from trapping arrays and other detection methods with a new and unique approach. DENSITY represents the first major addition to the analysis of trapping arrays in 20 years. Barker & White (2004) discuss how existing software such as Program MARK requires that each new model's likelihood must be programmed specifically for that model. They wishfully think that future software might allow the user to combine pieces of likelihood functions together to generate estimates. The idea is interesting, and maybe some bright young statistician can work out the specifics to implement the procedure. Choquet et al. (2004) describe MSURGE, a software package that implements the multistate capture-recapture models. The unique feature of MSURGE is that the design matrix is constructed with an interpreted language called GEMACO. Because MSURGE is limited to just multistate models, the special requirements of these likelihoods can be provided. The software and methods presented in these papers give biologists and wildlife managers an expanding range of possibilities for data analysis. Although ease of use is generally getting better, it does not replace the need for understanding of the requirements and structure of the models being computed. The internet provides access to many free software packages as well as user-discussion groups to share knowledge and ideas. (A starting point for wildlife-related applications is http://www.phidot.org.)

  16. Data Processing System (DPS) software with experimental design, statistical analysis and data mining developed for use in entomological research.

    PubMed

    Tang, Qi-Yi; Zhang, Chuan-Xi

    2013-04-01

    A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology. © 2012 The Authors Insect Science © 2012 Institute of Zoology, Chinese Academy of Sciences.

  17. Comparison of requirements and capabilities of major multipurpose software packages.

    PubMed

    Igo, Robert P; Schnell, Audrey H

    2012-01-01

    The aim of this chapter is to introduce the reader to commonly used software packages and illustrate their input requirements, analysis options, strengths, and limitations. We focus on packages that perform more than one function and include a program for quality control, linkage, and association analyses. Additional inclusion criteria were (1) programs that are free to academic users and (2) currently supported, maintained, and developed. Using those criteria, we chose to review three programs: Statistical Analysis for Genetic Epidemiology (S.A.G.E.), PLINK, and Merlin. We will describe the required input format and analysis options. We will not go into detail about every possible program in the packages, but we will give an overview of the packages' requirements and capabilities.

  18. Company's Unusual Plan to Package Commercial Software with Business Textbooks Produces a Measure of Success.

    ERIC Educational Resources Information Center

    Watkins, Beverly T.

    1992-01-01

    Course Technology Inc. has developed 10 products combining textbooks with commercial software for college accounting, business, computer science, and statistics courses. Five of the products use Lotus 1-2-3 spreadsheet software. The products have been positively received by teachers and students. (DB)

  19. MORTICIA, a statistical analysis software package for determining optical surveillance system effectiveness.

    NASA Astrophysics Data System (ADS)

    Ramkilowan, A.; Griffith, D. J.

    2017-10-01

    Surveillance modelling in terms of the standard Detect, Recognise and Identify (DRI) thresholds remains a key requirement for determining the effectiveness of surveillance sensors. With readily available computational resources it has become feasible to perform statistically representative evaluations of the effectiveness of these sensors. A new capability for performing this Monte-Carlo type analysis is demonstrated in the MORTICIA (Monte-Carlo Optical Rendering for Theatre Investigations of Capability under the Influence of the Atmosphere) software package developed at the Council for Scientific and Industrial Research (CSIR). This first-generation, Python-based open-source integrated software package, currently in the alpha stage of development, aims to provide all the functionality required to perform statistical investigations of the effectiveness of optical surveillance systems in specific or generic deployment theatres. This includes modelling of the mathematical and physical processes that govern, amongst other components of a surveillance system, a sensor's detector and optical components, a target and its background, as well as the intervening atmospheric influences. In this paper we discuss integral aspects of the bespoke framework that are critical to the longevity of all subsequent modelling efforts. Additionally, some preliminary results are presented.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    P-Mart was designed specifically to allow cancer researchers to perform robust statistical processing of publicly available cancer proteomic datasets. To date, no other online statistical processing suite for proteomics exists. The P-Mart software is designed to allow statistical programmers to utilize these algorithms through packages in the R programming language, as well as offering a web-based interface using the Azure cloud technology. The Azure cloud technology also allows the release of the software via Docker containers.

  1. New generation of exploration tools: interactive modeling software and microcomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krajewski, S.A.

    1986-08-01

    Software packages offering interactive modeling techniques are now available for use on microcomputer hardware systems. These packages are reasonably priced for both company and independent explorationists; they do not require users to have high levels of computer literacy; they are capable of rapidly completing complex ranges of sophisticated geologic and geophysical modeling tasks; and they can produce presentation-quality output for comparison with real-world data. For example, interactive packages are available for mapping, log analysis, seismic modeling, reservoir studies, and financial projects as well as for applying a variety of statistical and geostatistical techniques to analysis of exploration data. More importantly, these packages enable explorationists to directly apply their geologic expertise when developing and fine-tuning models for identifying new prospects and for extending producing fields. As a result of these features, microcomputers and interactive modeling software are becoming common tools in many exploration offices. Gravity and magnetics software programs illustrate some of the capabilities of such exploration tools.

  2. MAGNAMWAR: an R package for genome-wide association studies of bacterial orthologs.

    PubMed

    Sexton, Corinne E; Smith, Hayden Z; Newell, Peter D; Douglas, Angela E; Chaston, John M

    2018-06-01

    Here we report on an R package for genome-wide association studies of orthologous genes in bacteria. Before using the software, orthologs from bacterial genomes or metagenomes are defined using local or online implementations of OrthoMCL. These presence-absence patterns are statistically associated with variation in user-collected phenotypes using the Mono-Associated GNotobiotic Animals Metagenome-Wide Association R package (MAGNAMWAR). Genotype-phenotype associations can be performed with several different statistical tests based on the type and distribution of the data. MAGNAMWAR is available on CRAN. Contact: john_chaston@byu.edu.

  3. tscvh R Package: Computation of the two samples test on microarray-sequencing data

    NASA Astrophysics Data System (ADS)

    Fajriyah, Rohmatul; Rosadi, Dedi

    2017-12-01

    We present a new R package, tscvh (two samples cross-variance homogeneity), as we called it. This package implements the cross-variance statistical test proposed and introduced by Fajriyah ([3] and [4]), based on the cross-variance concept. The test can be used as an alternative test for the significance of the difference between two means when the sample size is small, a situation which commonly arises in bioinformatics research. Based on its statistical distribution, the p-value can also be provided. The package is built under the assumption of homogeneity of variance between the samples.

  4. Play It Again: Teaching Statistics with Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Sigal, Matthew J.; Chalmers, R. Philip

    2016-01-01

    Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…
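
    The pedagogical idea translates directly into a few lines of base R. A minimal sketch (not the authors' package) estimating the empirical coverage of the nominal 95% t-interval:

    ```r
    set.seed(1)
    reps <- 5000
    cover <- replicate(reps, {
      x  <- rnorm(30, mean = 1)    # a sample from the assumed population
      ci <- t.test(x)$conf.int     # its 95% confidence interval
      ci[1] <= 1 && 1 <= ci[2]     # does the interval capture the true mean?
    })
    mean(cover)   # empirical coverage; should be close to 0.95
    ```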

  5. EQS Goes R: Simulations for SEM Using the Package REQS

    ERIC Educational Resources Information Center

    Mair, Patrick; Wu, Eric; Bentler, Peter M.

    2010-01-01

    The REQS package is an interface between the R environment of statistical computing and the EQS software for structural equation modeling. The package consists of 3 main functions that read EQS script files and import the results into R, call EQS script files from R, and run EQS script files from R and import the results after EQS computations.…

  6. Is liver perfusion CT reproducible? A study on intra- and interobserver agreement of normal hepatic haemodynamic parameters obtained with two different software packages.

    PubMed

    Bretas, Elisa Almeida Sathler; Torres, Ulysses S; Torres, Lucas Rios; Bekhor, Daniel; Saito Filho, Celso Fernando; Racy, Douglas Jorge; Faggioni, Lorenzo; D'Ippolito, Giuseppe

    2017-10-01

    To evaluate the agreement between the measurements of perfusion CT parameters in normal livers by using two different software packages. This retrospective study was based on 78 liver perfusion CT examinations acquired for detecting suspected liver metastasis. Patients with any morphological or functional hepatic abnormalities were excluded. The final analysis included 37 patients (59.7 ± 14.9 y). Two readers (1 and 2) independently measured perfusion parameters using different software packages from two major manufacturers (A and B). Arterial perfusion (AP) and portal perfusion (PP) were determined using the dual-input vascular one-compartmental model. Inter-reader agreement for each package and intrareader agreement between both packages were assessed with intraclass correlation coefficients (ICC) and Bland-Altman statistics. Inter-reader agreement was substantial for AP using software A (ICC = 0.82) and B (ICC = 0.85-0.86), fair for PP using software A (ICC = 0.44) and fair to moderate for PP using software B (ICC = 0.56-0.77). Intrareader agreement between software A and B ranged from slight to moderate (ICC = 0.32-0.62) for readers 1 and 2 considering the AP parameters, and from fair to moderate (ICC = 0.40-0.69) for readers 1 and 2 considering the PP parameters. At best there was only moderate agreement between both software packages, resulting in some uncertainty and suboptimal reproducibility. Advances in knowledge: Software-dependent factors may contribute to variance in perfusion measurements, demanding further technical improvements. AP measurements seem to be the most reproducible parameter to be adopted when evaluating liver perfusion CT.
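
    The agreement statistics named in the abstract are straightforward to reproduce in R; a sketch with the irr package and hand-computed Bland-Altman limits, using readings simulated for two hypothetical packages:

    ```r
    library(irr)

    set.seed(3)
    softA <- rnorm(37, mean = 100, sd = 15)   # perfusion values from package A
    softB <- softA + rnorm(37, sd = 5)        # package B: correlated, with extra noise

    # Two-way agreement ICC for single measurements, as in the inter-reader analysis
    icc(cbind(softA, softB), model = "twoway", type = "agreement", unit = "single")

    # Bland-Altman bias and 95% limits of agreement
    d <- softA - softB
    c(bias = mean(d), lower = mean(d) - 1.96 * sd(d), upper = mean(d) + 1.96 * sd(d))
    ```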

  7. SEDA: A software package for the Statistical Earthquake Data Analysis

    NASA Astrophysics Data System (ADS)

    Lombardi, A. M.

    2017-03-01

    In this paper, the first version of the software SEDA (SEDAv1.0), designed to help seismologists statistically analyze earthquake data, is presented. The package consists of a user-friendly Matlab-based interface, which allows the user to easily interact with the application, and a computational core of Fortran codes, to guarantee maximum speed. The primary factor driving the development of SEDA is to guarantee research reproducibility, which is a growing movement among scientists and highly recommended by the most important scientific journals. SEDAv1.0 is mainly devoted to producing accurate and fast outputs. Less care has been taken over the graphical appeal, which will be improved in the future. The main part of SEDAv1.0 is devoted to ETAS modeling. SEDAv1.0 contains a set of consistent tools on ETAS, allowing the estimation of parameters, the testing of the model on data, the simulation of catalogs, the identification of sequences and the calculation of forecasts. The peculiarities of the routines inside SEDAv1.0 are discussed in this paper. More specific details on the software are presented in the manual accompanying the program package.

  8. SEDA: A software package for the Statistical Earthquake Data Analysis

    PubMed Central

    Lombardi, A. M.

    2017-01-01

    In this paper, the first version of the software SEDA (SEDAv1.0), designed to help seismologists statistically analyze earthquake data, is presented. The package consists of a user-friendly Matlab-based interface, which allows the user to easily interact with the application, and a computational core of Fortran codes, to guarantee maximum speed. The primary factor driving the development of SEDA is to guarantee research reproducibility, which is a growing movement among scientists and highly recommended by the most important scientific journals. SEDAv1.0 is mainly devoted to producing accurate and fast outputs. Less care has been taken over the graphical appeal, which will be improved in the future. The main part of SEDAv1.0 is devoted to ETAS modeling. SEDAv1.0 contains a set of consistent tools on ETAS, allowing the estimation of parameters, the testing of the model on data, the simulation of catalogs, the identification of sequences and the calculation of forecasts. The peculiarities of the routines inside SEDAv1.0 are discussed in this paper. More specific details on the software are presented in the manual accompanying the program package. PMID:28290482

  9. Effectiveness of Simulation in a Hybrid and Online Networking Course.

    ERIC Educational Resources Information Center

    Cameron, Brian H.

    2003-01-01

    Reports on a study that compares the performance of students enrolled in two sections of a Web-based computer networking course: one utilizing a simulation package and the second utilizing a static, graphical software package. Analysis shows statistically significant improvements in performance in the simulation group compared to the…

  10. Technological Tools in the Introductory Statistics Classroom: Effects on Student Understanding of Inferential Statistics

    ERIC Educational Resources Information Center

    Meletiou-Mavrotheris, Maria

    2004-01-01

    While technology has become an integral part of introductory statistics courses, the programs typically employed are professional packages designed primarily for data analysis rather than for learning. Findings from several studies suggest that use of such software in the introductory statistics classroom may not be very effective in helping…

  11. Software engineering the mixed model for genome-wide association studies on large samples.

    PubMed

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
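
    For reference, the mixed model at the center of this review has a standard form (conventional notation, not reproduced verbatim from the article):

    ```latex
    \[
    \mathbf{y} = X\boldsymbol{\beta} + Z\mathbf{u} + \mathbf{e}, \qquad
    \mathbf{u} \sim N\!\left(\mathbf{0},\, \sigma_g^2 K\right), \qquad
    \mathbf{e} \sim N\!\left(\mathbf{0},\, \sigma_e^2 I\right)
    \]
    ```

    Here the fixed effects in X include the tested marker and population-structure covariates, K is the kinship matrix encoding relatedness, and the variance components are typically estimated by REML.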

  12. Open-source Software for Exoplanet Atmospheric Modeling

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph

    2018-01-01

    I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra implemented for Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.

  13. smwrGraphs—An R package for graphing hydrologic data, version 1.1.2

    USGS Publications Warehouse

    Lorenz, David L.; Diekoff, Aliesha L.

    2017-01-31

    This report describes an R package called smwrGraphs, which consists of a collection of graphing functions for hydrologic data within R, a programming language and software environment for statistical computing. The functions in the package have been developed by the U.S. Geological Survey to create high-quality graphs for publication or presentation of hydrologic data that meet U.S. Geological Survey graphics guidelines.

  14. Teacher's Corner: Structural Equation Modeling with the Sem Package in R

    ERIC Educational Resources Information Center

    Fox, John

    2006-01-01

    R is free, open-source, cooperatively developed software that implements the S statistical programming language and computing environment. The current capabilities of R are extensive, and it is in wide use, especially among statisticians. The sem package provides basic structural equation modeling facilities in R, including the ability to fit…
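
    A minimal one-factor model in the sem package might look as follows; a hedged sketch assuming a sem version (>= 3.0) in which specifyModel accepts a text= argument, with simulated indicator data:

    ```r
    library(sem)

    set.seed(9)
    f   <- rnorm(200)
    dat <- data.frame(x1 = f + rnorm(200, sd = 0.5),
                      x2 = f + rnorm(200, sd = 0.5),
                      x3 = f + rnorm(200, sd = 0.5))

    # One latent factor F with three indicators; F's variance fixed at 1 for
    # identification (error variances for x1..x3 are added automatically)
    model <- specifyModel(text = "
      F -> x1, lam1, NA
      F -> x2, lam2, NA
      F -> x3, lam3, NA
      F <-> F, NA,   1
    ")

    fit <- sem(model, S = cov(dat), N = nrow(dat))
    summary(fit)
    ```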

  15. Information Technology.

    ERIC Educational Resources Information Center

    Marcum, Deanna; Boss, Richard

    1983-01-01

    Relates office automation to its application in libraries, discussing computer software packages for microcomputers performing tasks involved in word processing, accounting, statistical analysis, electronic filing cabinets, and electronic mail systems. (EJS)

  16. R-Based Software for the Integration of Pathway Data into Bioinformatic Algorithms

    PubMed Central

    Kramer, Frank; Bayerlová, Michaela; Beißbarth, Tim

    2014-01-01

    Putting new findings into the context of available literature knowledge is one approach to deal with the surge of high-throughput data results. Furthermore, prior knowledge can increase the performance and stability of bioinformatic algorithms, for example, methods for network reconstruction. In this review, we examine software packages for the statistical computing framework R, which enable the integration of pathway data for further bioinformatic analyses. Different approaches to integrate and visualize pathway data are identified and packages are stratified concerning their features according to a number of different aspects: data import strategies, the extent of available data, dependencies on external tools, integration with further analysis steps and visualization options are considered. A total of 12 packages integrating pathway data are reviewed in this manuscript. These are supplemented by five R-specific packages for visualization and six connector packages, which provide access to external tools. PMID:24833336

  17. dartr: An r package to facilitate analysis of SNP data generated from reduced representation genome sequencing.

    PubMed

    Gruber, Bernd; Unmack, Peter J; Berry, Oliver F; Georges, Arthur

    2018-05-01

    Although vast technological advances have been made and genetic software packages are growing in number, it is not a trivial task to analyse SNP data. We announce a new r package, dartr, enabling the analysis of single nucleotide polymorphism data for population genomic and phylogenomic applications. dartr provides user-friendly functions for data quality control and marker selection, and permits rigorous evaluations of conformation to Hardy-Weinberg equilibrium, gametic-phase disequilibrium and neutrality. The package reports standard descriptive statistics, permits exploration of patterns in the data through principal components analysis and conducts standard F-statistics, as well as basic phylogenetic analyses, population assignment, isolation by distance and exports data to a variety of commonly used downstream applications (e.g., newhybrids, faststructure and phylogeny applications) outside of the r environment. The package serves two main purposes: first, a user-friendly approach to lower the hurdle to analyse such data; therefore, the package comes with a detailed tutorial targeted to the r beginner to allow data analysis without requiring deep knowledge of r. Second, we use a single, well-established format, genlight from the adegenet package, as input for all our functions to avoid data reformatting. By strictly using the genlight format, we hope to facilitate this format as the de facto standard of future software developments and hence reduce the format jungle of genetic data sets. The dartr package is available via the r CRAN network and GitHub. © 2017 John Wiley & Sons Ltd.
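
    A typical dartr session follows the package's gl.* naming convention. A hedged sketch follows; the bundled example dataset name (testset.gl) is an assumption, and a DArT report file would normally be imported with gl.read.dart().

    ```r
    library(dartr)

    gl <- testset.gl                                  # example genlight object assumed to ship with dartr
    gl <- gl.filter.callrate(gl, threshold = 0.95)    # quality control: drop low-call-rate loci
    pc <- gl.pcoa(gl)                                 # principal coordinates analysis
    gl.pcoa.plot(pc, gl)                              # visualize population structure
    ```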

  18. Contrast Analysis: A Tutorial

    ERIC Educational Resources Information Center

    Haans, Antal

    2018-01-01

    Contrast analysis is a relatively simple but effective statistical method for testing theoretical predictions about differences between group means against the empirical data. Despite its advantages, contrast analysis is hardly used to date, perhaps because it is not implemented in a convenient manner in many statistical software packages. This…
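
    Although the tutorial laments patchy software support, planned contrasts need nothing beyond base R. A sketch with invented weights and data:

    ```r
    set.seed(11)
    grp <- factor(rep(c("control", "low", "high"), each = 10),
                  levels = c("control", "low", "high"))
    y <- rnorm(30) + rep(c(0, 0.5, 1.0), each = 10)

    # Orthogonal contrast weights over the levels, in order: control, low, high
    contrasts(grp) <- cbind(trt_vs_ctrl = c(-2, 1, 1),
                            high_vs_low = c(0, -1, 1))

    fit <- lm(y ~ grp)
    summary(fit)   # t-tests for the two planned contrasts
    ```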

  19. MDTraj: A Modern Open Library for the Analysis of Molecular Dynamics Trajectories.

    PubMed

    McGibbon, Robert T; Beauchamp, Kyle A; Harrigan, Matthew P; Klein, Christoph; Swails, Jason M; Hernández, Carlos X; Schwantes, Christian R; Wang, Lee-Ping; Lane, Thomas J; Pande, Vijay S

    2015-10-20

    As molecular dynamics (MD) simulations continue to evolve into powerful computational tools for studying complex biomolecular systems, the necessity of flexible and easy-to-use software tools for the analysis of these simulations is growing. We have developed MDTraj, a modern, lightweight, and fast software package for analyzing MD simulations. MDTraj reads and writes trajectory data in a wide variety of commonly used formats. It provides a large number of trajectory analysis capabilities including minimal root-mean-square-deviation calculations, secondary structure assignment, and the extraction of common order parameters. The package has a strong focus on interoperability with the wider scientific Python ecosystem, bridging the gap between MD data and the rapidly growing collection of industry-standard statistical analysis and visualization tools in Python. MDTraj is a powerful and user-friendly software package that simplifies the analysis of MD data and connects these datasets with the modern interactive data science software ecosystem in Python. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  20. MDTraj: A Modern Open Library for the Analysis of Molecular Dynamics Trajectories

    PubMed Central

    McGibbon, Robert T.; Beauchamp, Kyle A.; Harrigan, Matthew P.; Klein, Christoph; Swails, Jason M.; Hernández, Carlos X.; Schwantes, Christian R.; Wang, Lee-Ping; Lane, Thomas J.; Pande, Vijay S.

    2015-01-01

    As molecular dynamics (MD) simulations continue to evolve into powerful computational tools for studying complex biomolecular systems, the necessity of flexible and easy-to-use software tools for the analysis of these simulations is growing. We have developed MDTraj, a modern, lightweight, and fast software package for analyzing MD simulations. MDTraj reads and writes trajectory data in a wide variety of commonly used formats. It provides a large number of trajectory analysis capabilities including minimal root-mean-square-deviation calculations, secondary structure assignment, and the extraction of common order parameters. The package has a strong focus on interoperability with the wider scientific Python ecosystem, bridging the gap between MD data and the rapidly growing collection of industry-standard statistical analysis and visualization tools in Python. MDTraj is a powerful and user-friendly software package that simplifies the analysis of MD data and connects these datasets with the modern interactive data science software ecosystem in Python. PMID:26488642

  1. AnthropMMD: An R package with a graphical user interface for the mean measure of divergence.

    PubMed

    Santos, Frédéric

    2018-01-01

    The mean measure of divergence is a dissimilarity measure between groups of individuals described by dichotomous variables. It is well suited to datasets with many missing values, and it is generally used to compute distance matrices and represent phenograms. Although often used in biological anthropology and archaeozoology, this method suffers from a lack of implementation in common statistical software. A package for the R statistical software, AnthropMMD, is presented here. Offering a dynamic graphical user interface, it is the first one dedicated to Smith's mean measure of divergence. The package also provides facilities for graphical representations and the crucial step of trait selection, so that the entire analysis can be performed through the graphical user interface. Its use is demonstrated using an artificial dataset, and the impact of trait selection is discussed. Finally, AnthropMMD is compared to three other free tools available for calculating the mean measure of divergence, and is proven to be consistent with them. © 2017 Wiley Periodicals, Inc.
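
    The statistic itself is compact. Below is a base-R sketch of Smith's mean measure of divergence for two groups using the Anscombe angular transformation, one common choice for this statistic; AnthropMMD's own implementation and options may differ in details.

      # theta: angular transformation of a trait frequency k successes out of n
      theta <- function(n, k) asin(1 - 2 * (k + 3/8) / (n + 3/4))
      mmd <- function(n1, k1, n2, k2) {
        # n*: per-trait sample sizes; k*: counts of individuals showing each trait
        d <- (theta(n1, k1) - theta(n2, k2))^2 -
             (1 / (n1 + 0.5) + 1 / (n2 + 0.5))         # small-sample correction term
        mean(d)                                        # average over traits
      }
      # Example with three dichotomous traits:
      mmd(n1 = c(50, 48, 52), k1 = c(10, 30, 5),
          n2 = c(40, 41, 39), k2 = c(20, 12, 9))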

  2. MPTinR: analysis of multinomial processing tree models in R.

    PubMed

    Singmann, Henrik; Kellen, David

    2013-06-01

    We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/ .
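
    fit.mpt is the package's central fitting function. The sketch below fits a one-high-threshold recognition model written in the package's "easy" equation format; the syntax is reproduced from memory of the package documentation, so consult the current manual before relying on it.

      library(MPTinR)
      # Categories: old items -> hit, miss; new items -> false alarm, correct
      # rejection. Trees are separated by a blank line.
      model <- "
      r + (1 - r) * b
      (1 - r) * (1 - b)

      b
      (1 - b)
      "
      data <- c(75, 25, 30, 70)
      fit.mpt(data, textConnection(model))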

  3. Proposing a Mathematical Software Tool in Physics Secondary Education

    ERIC Educational Resources Information Center

    Baltzis, Konstantinos B.

    2009-01-01

    MathCad® is a very popular software tool for mathematical and statistical analysis in science and engineering. Its low cost, ease of use, extensive function library, and worksheet-like user interface distinguish it among other commercial packages. Its features are also well suited to educational process. The use of natural mathematical notation…

  4. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
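
    SaSAT itself is Matlab-based, but its two workhorse techniques, Latin hypercube sampling and partial rank correlation coefficients (PRCC), are easy to sketch in base R. This is a generic illustration of those methods applied to a toy model, not SaSAT code.

      set.seed(42)
      N <- 1000
      lhs <- function(N) (sample(N) - runif(N)) / N    # one stratified U(0,1) sample
      X <- data.frame(beta  = qunif(lhs(N), 0.1, 0.5),   # e.g., transmission rate
                      gamma = qunif(lhs(N), 0.05, 0.2))  # e.g., recovery rate
      y <- X$beta / X$gamma                            # toy model output (R0)
      prcc <- function(X, y, j) {
        R  <- as.data.frame(lapply(X, rank))           # rank-transform everything
        rx <- resid(lm(R[[j]] ~ ., data = R[-j]))      # x_j with other parameters removed
        ry <- resid(lm(rank(y) ~ ., data = R[-j]))     # y with other parameters removed
        cor(rx, ry)                                    # partial rank correlation
      }
      sapply(seq_along(X), function(j) prcc(X, y, j))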

  5. cit: hypothesis testing software for mediation analysis in genomic applications.

    PubMed

    Millstein, Joshua; Chen, Gary K; Breton, Carrie V

    2016-08-01

    The challenges of successfully applying causal inference methods include: (i) satisfying underlying assumptions, (ii) limitations in data/models accommodated by the software and (iii) low power of common multiple testing approaches. The causal inference test (CIT) is based on hypothesis testing rather than estimation, allowing the testable assumptions to be evaluated in the determination of statistical significance. A user-friendly software package provides P-values and optionally permutation-based FDR estimates (q-values) for potential mediators. It can handle single and multiple binary and continuous instrumental variables, binary or continuous outcome variables and adjustment covariates. Also, the permutation-based FDR option provides a non-parametric implementation. Simulation studies demonstrate the validity of the cit package and show a substantial advantage of permutation-based FDR over other common multiple testing strategies. The cit open-source R package is freely available from the CRAN website (https://cran.r-project.org/web/packages/cit/index.html) with embedded C++ code that utilizes the GNU Scientific Library, also freely available (http://www.gnu.org/software/gsl/). Contact: joshua.millstein@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Explorations in Statistics: Standard Deviations and Standard Errors

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2008-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…
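
    In the spirit of the series, a few lines of R make the distinction concrete (a toy illustration, not taken from the article itself):

      # The SD describes the spread of observations; the standard error of the
      # mean (SEM) describes the uncertainty of the sample mean.
      set.seed(1)
      x <- rnorm(100, mean = 10, sd = 2)
      sd(x)                        # estimates the population SD (about 2)
      sd(x) / sqrt(length(x))      # SEM (about 0.2)
      # The SEM shrinks as n grows; the SD does not:
      means <- replicate(2000, mean(rnorm(100, 10, 2)))
      sd(means)                    # close to 2 / sqrt(100) = 0.2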

  7. PsyToolkit: a software package for programming psychological experiments using Linux.

    PubMed

    Stoet, Gijsbert

    2010-11-01

    PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the Gnu Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.

  8. User’s guide for MapMark4GUI—A graphical user interface for the MapMark4 R package

    USGS Publications Warehouse

    Shapiro, Jason

    2018-05-29

    MapMark4GUI is an R graphical user interface (GUI) developed by the U.S. Geological Survey to support user implementation of the MapMark4 R statistical software package. MapMark4 was developed by the U.S. Geological Survey to implement probability calculations for simulating undiscovered mineral resources in quantitative mineral resource assessments. The GUI provides an easy-to-use tool to input data, run simulations, and format output results for the MapMark4 package. The GUI is written and accessed in the R statistical programming language. This user’s guide includes instructions on installing and running MapMark4GUI and descriptions of the statistical output processes, output files, and test data files.

  9. DMRfinder: efficiently identifying differentially methylated regions from MethylC-seq data.

    PubMed

    Gaspar, John M; Hart, Ronald P

    2017-11-29

    DNA methylation is an epigenetic modification that is studied at a single-base resolution with bisulfite treatment followed by high-throughput sequencing. After alignment of the sequence reads to a reference genome, methylation counts are analyzed to determine genomic regions that are differentially methylated between two or more biological conditions. Even though a variety of software packages is available for different aspects of the bioinformatics analysis, they often produce biased results or impose excessive computational demands. DMRfinder is a novel computational pipeline that identifies differentially methylated regions efficiently. Following alignment, DMRfinder extracts methylation counts and performs a modified single-linkage clustering of methylation sites into genomic regions. It then compares methylation levels using beta-binomial hierarchical modeling and Wald tests. Among its innovative attributes are the analyses of novel methylation sites and methylation linkage, as well as the simultaneous statistical analysis of multiple sample groups. To demonstrate its efficiency, DMRfinder is benchmarked against other computational approaches using a large published dataset. Contrasting two replicates of the same sample yielded minimal genomic regions with DMRfinder, whereas two alternative software packages reported a substantial number of false positives. Further analyses of biological samples revealed fundamental differences between DMRfinder and another software package, despite the fact that they utilize the same underlying statistical basis. For each step, DMRfinder completed the analysis in a fraction of the time required by other software. Among the computational approaches for identifying differentially methylated regions from high-throughput bisulfite sequencing datasets, DMRfinder is the first that integrates all the post-alignment steps in a single package. Compared to other software, DMRfinder is extremely efficient and unbiased in this process. DMRfinder is free and open-source software, available on GitHub (github.com/jsh58/DMRfinder); it is written in Python and R, and is supported on Linux.

  10. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  11. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. The objective of this study was to report the features of IBMWA and to discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.

  12. Reference datasets for 2-treatment, 2-sequence, 2-period bioequivalence studies.

    PubMed

    Schütz, Helmut; Labes, Detlew; Fuglsang, Anders

    2014-11-01

    It is difficult to validate statistical software used to assess bioequivalence since very few datasets with known results are in the public domain, and the few that are published are of moderate size and balanced. The purpose of this paper is therefore to introduce reference datasets of varying complexity in terms of dataset size and characteristics (balance, range, outlier presence, residual error distribution) for 2-treatment, 2-period, 2-sequence bioequivalence studies and to report their point estimates and 90% confidence intervals which companies can use to validate their installations. The results for these datasets were calculated using the commercial packages EquivTest, Kinetica, SAS and WinNonlin, and the non-commercial package R. The results of three of these packages mostly agree, but imbalance between sequences seems to provoke questionable results with one package, which illustrates well the need for proper software validation.
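
    For readers wanting to attempt such a validation, the conventional fixed-effects analysis of a balanced 2-treatment, 2-sequence, 2-period study takes only a few lines of base R. The data below are simulated for illustration, not one of the paper's reference datasets.

      set.seed(7)
      be <- expand.grid(subj = factor(1:12), per = factor(1:2))
      be$seq <- ifelse(as.integer(be$subj) <= 6, "RT", "TR")
      be$trt <- factor(ifelse((be$seq == "RT") == (be$per == "1"), "R", "T"),
                       levels = c("R", "T"))
      # log-normal AUC with a subject effect and a 5% true treatment effect
      be$AUC <- exp(0.05 * (be$trt == "T") + rep(rnorm(12, 0, 0.3), 2) + rnorm(24, 0, 0.2))
      fit <- lm(log(AUC) ~ subj + per + trt, data = be)   # subject absorbs sequence
      exp(cbind(PE = coef(fit)["trtT"], confint(fit, "trtT", level = 0.90)))
      # bioequivalence is concluded if the 90% CI for the T/R ratio lies in 0.80-1.25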

  13. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data.

    PubMed

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates reuse in other software packages.

  14. CSAM Metrology Software Tool

    NASA Technical Reports Server (NTRS)

    Vu, Duc; Sandor, Michael; Agarwal, Shri

    2005-01-01

    CSAM Metrology Software Tool (CMeST) is a computer program for analysis of false-color CSAM images of plastic-encapsulated microcircuits. (CSAM signifies C-mode scanning acoustic microscopy.) The colors in the images indicate areas of delamination within the plastic packages. Heretofore, the images have been interpreted by human examiners. Hence, interpretations have not been entirely consistent and objective. CMeST processes the color information in image-data files to detect areas of delamination without incurring inconsistencies of subjective judgement. CMeST can be used to create a database of baseline images of packages acquired at given times for comparison with images of the same packages acquired at later times. Any area within an image can be selected for analysis, which can include examination of different delamination types by location. CMeST can also be used to perform statistical analyses of image data. Results of analyses are available in a spreadsheet format for further processing. The results can be exported to any data-base-processing software.

  15. Implementation of building information modeling in Malaysian construction industry

    NASA Astrophysics Data System (ADS)

    Memon, Aftab Hameed; Rahman, Ismail Abdul; Harman, Nur Melly Edora

    2014-10-01

    This study assessed the implementation level of Building Information Modeling (BIM) in the construction industry of Malaysia. It also investigated several computer software packages facilitating BIM and the challenges affecting its implementation. Data collection was carried out through a questionnaire survey among construction practitioners: of 150 questionnaire sets distributed to consultant, contractor and client organizations, 95 completed forms were returned and analyzed statistically. The findings indicated that the level of BIM implementation in the Malaysian construction industry is very low. The average index method, employed to assess the effectiveness of various BIM software packages, highlighted Bentley Construction, AutoCAD and ArchiCAD as the three most popular and effective packages. The major challenges to BIM implementation are that it requires enhanced collaboration, adds work for designers, and raises interoperability issues. To improve the level of BIM implementation in the Malaysian industry, it is recommended that a flexible BIM training program be created for all practitioners.

  16. Melanie II--a third-generation software package for analysis of two-dimensional electrophoresis images: I. Features and user interface.

    PubMed

    Appel, R D; Palagi, P M; Walther, D; Vargas, J R; Sanchez, J C; Ravier, F; Pasquali, C; Hochstrasser, D F

    1997-12-01

    Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments that were possible thanks to new graphical interface standards such as XWindows, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory that is involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly object-oriented software package runs on multiple platforms, including Unix, MS-Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the "Virtual Lab" of the post-genome area.

  17. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data

    PubMed Central

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates reuse in other software packages. PMID:21253357

  18. SPARSKIT: A basic tool kit for sparse matrix computations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1990-01-01

    Presented here are the main features of a tool package for manipulating and working with sparse matrices. One of the goals of the package is to provide basic tools to facilitate the exchange of software and data between researchers in sparse matrix computations. The starting point is the Harwell/Boeing collection of matrices for which the authors provide a number of tools. Among other things, the package provides programs for converting data structures, printing simple statistics on a matrix, plotting a matrix profile, and performing linear algebra operations with sparse matrices.
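
    Readers working in R rather than Fortran can reproduce the flavour of these tools with the Matrix package, which likewise converts among sparse storage formats, reports simple statistics and plots sparsity patterns. This is an R-side analogue, not SPARSKIT itself.

      library(Matrix)
      m <- sparseMatrix(i = c(1, 3, 5), j = c(2, 3, 1), x = c(4, 7, 1), dims = c(5, 5))
      class(m)                       # "dgCMatrix": compressed sparse column (CSC)
      mr <- as(m, "RsparseMatrix")   # convert to compressed sparse row (CSR)
      mt <- as(m, "TsparseMatrix")   # convert to coordinate/triplet format
      nnzero(m)                      # simple statistic: number of nonzeros
      image(m)                       # plot the matrix profile (sparsity pattern)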

  19. Clonality: an R package for testing clonal relatedness of two tumors from the same patient based on their genomic profiles.

    PubMed

    Ostrovnaya, Irina; Seshan, Venkatraman E; Olshen, Adam B; Begg, Colin B

    2011-06-15

    If a cancer patient develops multiple tumors, it is sometimes impossible to determine whether these tumors are independent or clonal based solely on pathological characteristics. Investigators have studied how to improve this diagnostic challenge by comparing the presence of loss of heterozygosity (LOH) at selected genetic locations of tumor samples, or by comparing genomewide copy number array profiles. We have previously developed statistical methodology to compare such genomic profiles for evidence of clonality. We assembled the software for these tests in a new R package called 'Clonality'. For LOH profiles, the package contains significance tests. The analysis of copy number profiles includes a likelihood ratio statistic and reference distribution, as well as an option to produce various plots that summarize the results. The package is available from Bioconductor (http://bioconductor.org/packages/release/bioc/html/Clonality.html) and from http://www.mskcc.org/mskcc/html/13287.cfm.

  20. 2 × 2 Tables: a note on Campbell's recommendation.

    PubMed

    Busing, F M T A; Weaver, B; Dubois, S

    2016-04-15

    For 2 × 2 tables, Egon Pearson's N - 1 chi-squared statistic is theoretically more sound than Karl Pearson's chi-squared statistic, and provides more accurate p values. Moreover, Egon Pearson's N - 1 chi-squared statistic is equal to the Mantel-Haenszel chi-squared statistic for a single 2 × 2 table, and as such, is often available in statistical software packages like SPSS, SAS, Stata, or R, which facilitates compliance with Ian Campbell's recommendations. Copyright © 2015 John Wiley & Sons, Ltd.
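
    The relationship is easy to verify numerically in base R with illustrative counts; the Mantel-Haenszel statistic for a single stratum is computed here directly from its definition.

      tbl <- matrix(c(12, 5, 7, 16), nrow = 2)       # an illustrative 2 x 2 table
      N  <- sum(tbl)
      X2 <- unname(chisq.test(tbl, correct = FALSE)$statistic)  # Karl Pearson
      X2 * (N - 1) / N                               # Egon Pearson's N - 1 statistic
      a <- tbl[1, 1]                                 # Mantel-Haenszel for one stratum:
      E <- sum(tbl[1, ]) * sum(tbl[, 1]) / N         #   expected count in cell (1,1)
      V <- prod(rowSums(tbl)) * prod(colSums(tbl)) / (N^2 * (N - 1))
      (a - E)^2 / V                                  # equals the N - 1 statistic above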

  1. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
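
    The tools described appear to correspond to what is now distributed on CRAN as the NADA package; assuming its ros() interface, a minimal use looks like this (toy data; TRUE marks a value reported as below its detection limit):

      library(NADA)
      conc <- c(0.5, 0.5, 1.0, 1.2, 2.0, 2.4, 3.8, 5.0)   # measured value or detection limit
      cens <- c(TRUE, TRUE, TRUE, FALSE, FALSE, TRUE, FALSE, FALSE)
      fit <- ros(conc, cens)              # robust regression on order statistics
      mean(fit); sd(fit)                  # summary statistics honouring the censoring
      quantile(fit, probs = c(0.5, 0.9))  # modeled percentiles
      plot(fit)                           # modeled distribution on a probability plot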

  2. Construction and comparative evaluation of different activity detection methods in brain FDG-PET.

    PubMed

    Buchholz, Hans-Georg; Wenzel, Fabian; Gartenschläger, Martin; Thiele, Frank; Young, Stewart; Reuss, Stefan; Schreckenberger, Mathias

    2015-08-18

    We constructed and evaluated reference brain FDG-PET databases for usage by three software programs (Computer-aided diagnosis for dementia (CAD4D), Statistical Parametric Mapping (SPM) and NEUROSTAT), which allow a user-independent detection of dementia-related hypometabolism in patients' brain FDG-PET. Thirty-seven healthy volunteers were scanned in order to construct brain FDG reference databases, which reflect the normal, age-dependent glucose consumption in human brain, using either software. Databases were compared to each other to assess the impact of different stereotactic normalization algorithms used by either software package. In addition, performance of the new reference databases in the detection of altered glucose consumption in the brains of patients was evaluated by calculating statistical maps of regional hypometabolism in FDG-PET of 20 patients with confirmed Alzheimer's dementia (AD) and of 10 non-AD patients. Extent (hypometabolic volume referred to as cluster size) and magnitude (peak z-score) of detected hypometabolism was statistically analyzed. Differences between the reference databases built by CAD4D, SPM or NEUROSTAT were observed. Due to the different normalization methods, altered spatial FDG patterns were found. When analyzing patient data with the reference databases created using CAD4D, SPM or NEUROSTAT, similar characteristic clusters of hypometabolism in the same brain regions were found in the AD group with either software. However, larger z-scores were observed with CAD4D and NEUROSTAT than those reported by SPM. Better concordance with CAD4D and NEUROSTAT was achieved using the spatially normalized images of SPM and an independent z-score calculation. The three software packages identified the peak z-scores in the same brain region in 11 of 20 AD cases, and there was concordance between CAD4D and SPM in 16 AD subjects. The clinical evaluation of brain FDG-PET of 20 AD patients with either CAD4D-, SPM- or NEUROSTAT-generated databases from an identical reference dataset showed similar patterns of hypometabolism in the brain regions known to be involved in AD. The extent of hypometabolism and peak z-score appeared to be influenced by the calculation method used in each software package rather than by different spatial normalization parameters.
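
    Stripped of registration and smoothing, the core computation shared by these packages is a voxel-wise z-score of the patient scan against the reference database. A schematic R version on simulated, intensity- and spatially normalized data:

      set.seed(3)
      # ref: voxels x subjects matrix of normal scans; pat: one patient scan
      ref <- matrix(rnorm(1000 * 37, mean = 100, sd = 10), nrow = 1000, ncol = 37)
      pat <- rnorm(1000, mean = 95, sd = 10)
      z <- (pat - rowMeans(ref)) / apply(ref, 1, sd)   # z-score per voxel
      sum(z < -1.64)    # voxels flagged as hypometabolic (one-sided p < 0.05)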

  3. Bayesian Hierarchical Random Effects Models in Forensic Science.

    PubMed

    Aitken, Colin G G

    2018-01-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.
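
    The normal-normal random effects model at the heart of this history admits a closed-form likelihood ratio. The sketch below (illustrative numbers, not SAILR code) compares the same-source and different-source explanations of two sets of measurements, assuming a known within-source standard deviation sigma and a normal between-source distribution with mean mu and standard deviation tau.

      lr <- function(xbar, m, ybar, n, sigma, mu, tau) {
        s1 <- sigma^2 / m; s2 <- sigma^2 / n        # variances of the two sample means
        v  <- 1 / (1 / s1 + 1 / s2)                 # variance of the pooled mean
        w  <- v * (xbar / s1 + ybar / s2)           # pooled mean under a common source
        num <- dnorm(xbar - ybar, 0, sqrt(s1 + s2)) * dnorm(w, mu, sqrt(v + tau^2))
        den <- dnorm(xbar, mu, sqrt(s1 + tau^2)) * dnorm(ybar, mu, sqrt(s2 + tau^2))
        num / den                                   # likelihood ratio
      }
      # Two glass fragments with very close refractive index means:
      lr(xbar = 1.51907, m = 10, ybar = 1.51909, n = 5,
         sigma = 4e-5, mu = 1.5182, tau = 4e-3)     # LR well above 1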

  4. An automated approach to Litchfield and Wilcoxon's evaluation of dose–effect experiments using the R package LW1949

    USGS Publications Warehouse

    Adams, Jean V.; Slaght, Karen; Boogaard, Michael A.

    2016-01-01

    The authors developed a package, LW1949, for use with the statistical software R to automatically carry out the manual steps of Litchfield and Wilcoxon's method of evaluating dose–effect experiments. The LW1949 package consistently finds the best fitting dose–effect relation by minimizing the chi-squared statistic of the observed and expected number of affected individuals and substantially speeds up the line-fitting process and other calculations that Litchfield and Wilcoxon originally carried out by hand. Environ Toxicol Chem 2016;9999:1–4. Published 2016 Wiley Periodicals Inc. on behalf of SETAC. This article is a US Government work and, as such, is in the public domain in the United States of America.
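
    For orientation, the dose-effect line that Litchfield and Wilcoxon fitted on logarithmic-probit paper corresponds to a probit regression of proportion affected on log10(dose). A base-R counterpart (not the LW1949 package's own chi-squared minimization) is:

      dose     <- c(0.5, 1, 2, 4, 8)               # illustrative doses
      total    <- rep(50, 5)
      affected <- c(2, 9, 22, 38, 48)
      fit <- glm(cbind(affected, total - affected) ~ log10(dose),
                 family = binomial(link = "probit"))
      10^(-coef(fit)[1] / coef(fit)[2])            # ED50: dose giving a 50% effect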

  5. An open-source software package for multivariate modeling and clustering: applications to air quality management.

    PubMed

    Wang, Xiuquan; Huang, Guohe; Zhao, Shan; Guo, Junhong

    2015-09-01

    This paper presents an open-source software package, rSCA, which is developed based upon a stepwise cluster analysis method and serves as a statistical tool for modeling the relationships between multiple dependent and independent variables. The rSCA package is efficient in dealing with both continuous and discrete variables, as well as nonlinear relationships between the variables. It divides the sample sets of dependent variables into different subsets (or subclusters) through a series of cutting and merging operations based upon the theory of multivariate analysis of variance (MANOVA). The modeling results are given by a cluster tree, which includes both intermediate and leaf subclusters as well as the flow paths from the root of the tree to each leaf subcluster specified by a series of cutting and merging actions. The rSCA package is a handy and easy-to-use tool and is freely available at http://cran.r-project.org/package=rSCA . By applying the developed package to air quality management in an urban environment, we demonstrate its effectiveness in dealing with the complicated relationships among multiple variables in real-world problems.

  6. Regression methods for spatially correlated data: an example using beetle attacks in a seed orchard

    Treesearch

    Preisler Haiganoush; Nancy G. Rappaport; David L. Wood

    1997-01-01

    We present a statistical procedure for studying the simultaneous effects of observed covariates and unmeasured spatial variables on responses of interest. The procedure uses regression type analyses that can be used with existing statistical software packages. An example using the rate of twig beetle attacks on Douglas-fir trees in a seed orchard illustrates the...
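
    One widely available way to realize such a regression in R is a generalized additive model with a smooth surface in the spatial coordinates standing in for the unmeasured spatial variables; this is a generic sketch of the idea on simulated data, not the authors' exact procedure.

      library(mgcv)
      set.seed(9)
      d <- data.frame(x = runif(200), y = runif(200), diam = rnorm(200, 30, 5))
      d$attacks <- rpois(200, exp(-2 + 0.05 * d$diam + sin(3 * d$x) + cos(3 * d$y)))
      fit <- gam(attacks ~ diam + s(x, y), family = poisson, data = d)
      summary(fit)    # covariate effect adjusted for the smooth spatial trend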

  7. Comparison of Absolute Apparent Diffusion Coefficient (ADC) Values in ADC Maps Generated Across Different Postprocessing Software: Reproducibility in Endometrial Carcinoma.

    PubMed

    Ghosh, Adarsh; Singh, Tulika; Singla, Veenu; Bagga, Rashmi; Khandelwal, Niranjan

    2017-12-01

    Apparent diffusion coefficient (ADC) maps are usually generated by built-in software provided by the MRI scanner vendors; however, various open-source postprocessing software packages are available for image manipulation and parametric map generation. The purpose of this study is to establish the reproducibility of absolute ADC values obtained using different postprocessing software programs. DW images with three b values were obtained with a 1.5-T MRI scanner, and the trace images were obtained. ADC maps were automatically generated by the in-line software provided by the vendor during image generation and were also separately generated on postprocessing software. These ADC maps were compared on the basis of ROIs using paired t test, Bland-Altman plot, mountain plot, and Passing-Bablok regression plot. There was a statistically significant difference in the mean ADC values obtained from the different postprocessing software programs when the same baseline trace DW images were used for the ADC map generation. For using ADC values as a quantitative cutoff for histologic characterization of tissues, standardization of the postprocessing algorithm is essential across processing software packages, especially in view of the implementation of vendor-neutral archiving.
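
    For reference, the underlying computation is a mono-exponential fit: S(b) = S0 * exp(-b * ADC), so log-signal is linear in b with slope -ADC. A per-voxel least-squares sketch with illustrative numbers:

      b <- c(0, 400, 800)                     # b-values in s/mm^2 (illustrative)
      S <- matrix(c(1000, 640, 410,           # signals for two example voxels
                    980,  700, 500), nrow = 2, byrow = TRUE)
      adc <- apply(S, 1, function(s) -coef(lm(log(s) ~ b))[2])
      adc * 1e3                               # ADC in units of 10^-3 mm^2/s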

  8. Computerized Cognition Laboratory.

    ERIC Educational Resources Information Center

    Motes, Michael A.; Wiegmann, Douglas A.

    1999-01-01

    Describes a software package entitled the "Computerized Cognition Laboratory" that helps integrate the teaching of cognitive psychology and research methods. Allows students to explore short-term memory, long-term memory, and decision making. Can also be used to teach the application of several statistical procedures. (DSK)

  9. Statistical software applications used in health services research: analysis of published studies in the U.S

    PubMed Central

    2011-01-01

    Background This study aims to identify the statistical software applications most commonly employed for data analysis in health services research (HSR) studies in the U.S. The study also examines the extent to which information describing the specific analytical software utilized is provided in published articles reporting on HSR studies. Methods Data were extracted from a sample of 1,139 articles (including 877 original research articles) published between 2007 and 2009 in three U.S. HSR journals that were considered to be representative of the field based upon a set of selection criteria. Descriptive analyses were conducted to categorize patterns in statistical software usage in those articles. The data were stratified by calendar year to detect trends in software use over time. Results Only 61.0% of original research articles in prominent U.S. HSR journals identified the particular type of statistical software application used for data analysis. Stata and SAS were overwhelmingly the most commonly used software applications employed (in 46.0% and 42.6% of articles respectively). However, SAS use grew considerably during the study period compared to other applications. Stratification of the data revealed that the type of statistical software used varied considerably by whether authors were from the U.S. or from other countries. Conclusions The findings highlight a need for HSR investigators to identify more consistently the specific analytical software used in their studies. Knowing that information can be important because different software packages might produce varying results owing to differences in the software's underlying estimation methods. PMID:21977990

  10. MNE software for processing MEG and EEG data

    PubMed Central

    Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808

  11. Picante: R tools for integrating phylogenies and ecology.

    PubMed

    Kembel, Steven W; Cowan, Peter D; Helmus, Matthew R; Cornwell, William K; Morlon, Helene; Ackerly, David D; Blomberg, Simon P; Webb, Campbell O

    2010-06-01

    Picante is a software package that provides a comprehensive set of tools for analyzing the phylogenetic and trait diversity of ecological communities. The package calculates phylogenetic diversity metrics, performs trait comparative analyses, manipulates phenotypic and phylogenetic data, and performs tests for phylogenetic signal in trait distributions, community structure and species interactions. Picante is a package for the R statistical language and environment written in R and C, released under a GPL v2 open-source license, and freely available on the web (http://picante.r-forge.r-project.org) and from CRAN (http://cran.r-project.org).
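
    A short session using the example data bundled with the package; the object names follow the package tutorial as remembered here, so check the current documentation.

      library(picante)
      data(phylocom)                        # bundled toy phylogeny plus communities
      comm <- phylocom$sample
      tree <- phylocom$phylo
      pd(comm, tree)                        # Faith's phylogenetic diversity per community
      ses.mpd(comm, cophenetic(tree),       # standardized effect size of mean pairwise
              null.model = "taxa.labels")   #   distance, tested against a null model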

  12. diffHic: a Bioconductor package to detect differential genomic interactions in Hi-C data.

    PubMed

    Lun, Aaron T L; Smyth, Gordon K

    2015-08-19

    Chromatin conformation capture with high-throughput sequencing (Hi-C) is a technique that measures the in vivo intensity of interactions between all pairs of loci in the genome. Most conventional analyses of Hi-C data focus on the detection of statistically significant interactions. However, an alternative strategy involves identifying significant changes in the interaction intensity (i.e., differential interactions) between two or more biological conditions. This is more statistically rigorous and may provide more biologically relevant results. Here, we present the diffHic software package for the detection of differential interactions from Hi-C data. diffHic provides methods for read pair alignment and processing, counting into bin pairs, filtering out low-abundance events and normalization of trended or CNV-driven biases. It uses the statistical framework of the edgeR package to model biological variability and to test for significant differences between conditions. Several options for the visualization of results are also included. The use of diffHic is demonstrated with real Hi-C data sets. Performance against existing methods is also evaluated with simulated data. On real data, diffHic is able to successfully detect interactions with significant differences in intensity between biological conditions. It also compares favourably to existing software tools on simulated data sets. These results suggest that diffHic is a viable approach for differential analyses of Hi-C data.
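
    Because diffHic hands statistical testing off to edgeR, that step can be sketched independently of the Hi-C-specific counting functions. Here a generic matrix of bin-pair counts stands in for diffHic's output; this illustrates the edgeR framework, not diffHic's own interface.

      library(edgeR)
      set.seed(5)
      counts <- matrix(rnbinom(4000, mu = 20, size = 10), ncol = 4)  # bin pairs x libraries
      group  <- factor(c("A", "A", "B", "B"))
      y <- DGEList(counts, group = group)
      y <- calcNormFactors(y)                   # normalization factors per library
      design <- model.matrix(~ group)
      y <- estimateDisp(y, design)              # model biological variability
      fit <- glmQLFit(y, design)
      topTags(glmQLFTest(fit, coef = 2))        # bin pairs with differential intensity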

  13. LFSTAT - Low-Flow Analysis in R

    NASA Astrophysics Data System (ADS)

    Koffler, Daniel; Laaha, Gregor

    2013-04-01

    The calculation of characteristic stream flow during dry conditions is a basic requirement for many problems in hydrology, ecohydrology and water resources management. As opposed to floods, a number of different indices are used to characterise low flows and streamflow droughts. Although these indices and methods of calculation have been well documented in the WMO Manual on Low-flow Estimation and Prediction [1], comprehensive software enabling a fast and standardized calculation of low-flow statistics was missing. We present the new software package lfstat to fill this obvious gap. Our software package is based on the statistical open source software R, and expands it to analyse daily stream flow data records focusing on low flows. As command-line based programs are not everyone's preference, we also offer a plug-in for the R-Commander, an easy-to-use graphical user interface (GUI) for R based on tcl/tk. The functionality of lfstat includes estimation methods for low-flow indices, extreme value statistics, deficit characteristics, and additional graphical methods to control the computation of complex indices and to illustrate the data. Besides the basic low-flow indices, the baseflow index and recession constants can be computed. For extreme value statistics, state-of-the-art methods for L-moment based local and regional frequency analysis (RFA) are available. The tools for deficit characteristics include various pooling and threshold selection methods to support the calculation of drought duration and deficit indices. The most common graphics for low-flow analysis are available, and the plots can be modified according to the user's preferences. Graphics include hydrographs for different periods, flexible streamflow deficit plots, baseflow visualisation, recession diagnostics, flow duration curves as well as double mass curves, and many more. From a technical point of view, the package uses an S3 class called lfobj (low-flow object). These objects are ordinary R data frames including date, flow, hydrological year and, optionally, baseflow information. Once these objects are created, analyses can be performed by mouse-click and a script can be saved to make the analysis easily reproducible. At the moment we offer implementations of all major methods proposed in the WMO Manual on Low-flow Estimation and Prediction [1]. Future plans include a dynamic low-flow report in odt format using odfWeave, which allows automatic updates if data or analyses change. We hope to offer a tool that eases and structures the analysis of stream flow data focusing on low flows, and makes analyses transparent and communicable. The package can also be used in teaching students the first steps in low-flow hydrology. The software package can be installed from CRAN (latest stable version) and from R-Forge: http://r-forge.r-project.org (development version). References: [1] Gustard, A.; Demuth, S. (eds.), Manual on Low-flow Estimation and Prediction. Geneva, Switzerland: World Meteorological Organization (Operational Hydrology Report No. 50, WMO-No. 1029).
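
    Two of the basic indices can even be computed directly in base R for a daily series, which conveys what lfstat automates behind its lfobj class. This is a generic sketch on synthetic data, not lfstat code.

      set.seed(11)
      flow <- exp(rnorm(3650, log(10), 0.6))     # ten years of synthetic daily flows
      quantile(flow, 0.05)                       # Q95: flow exceeded 95% of the time
      mam7 <- function(q, days = 365) {
        m7 <- stats::filter(q, rep(1/7, 7), sides = 2)   # 7-day moving average
        yr <- rep(seq_len(length(q) %/% days), each = days)
        mean(tapply(m7, yr, min, na.rm = TRUE))  # mean annual 7-day minimum
      }
      mam7(flow)                                 # the MAM(7) low-flow index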

  14. The Computer: An Effective Research Assistant

    PubMed Central

    Gancher, Wendy

    1984-01-01

    The development of software packages such as data management systems and statistical packages has made it possible to process large amounts of research data. Data management systems make the organization and manipulation of such data easier. Floppy disks ease the problem of storing and retrieving records. Patient information can be kept confidential by limiting access to computer passwords linked with research files, or by using floppy disks. These attributes make the microcomputer essential to modern primary care research. PMID:21279042

  15. The High Cost of Complexity in Experimental Design and Data Analysis: Type I and Type II Error Rates in Multiway ANOVA.

    ERIC Educational Resources Information Center

    Smith, Rachel A.; Levine, Timothy R.; Lachlan, Kenneth A.; Fediuk, Thomas A.

    2002-01-01

    Notes that the availability of statistical software packages has led to a sharp increase in use of complex research designs and complex statistical analyses in communication research. Reports a series of Monte Carlo simulations which demonstrate that this complexity may come at a heavier cost than many communication researchers realize. Warns…

  16. GenoMatrix: A Software Package for Pedigree-Based and Genomic Prediction Analyses on Complex Traits.

    PubMed

    Nazarian, Alireza; Gezan, Salvador Alejandro

    2016-07-01

    Genomic and pedigree-based best linear unbiased prediction methodologies (G-BLUP and P-BLUP) have proven themselves efficient for partitioning the phenotypic variance of complex traits into its components, estimating the individuals' genetic merits, and predicting unobserved (or yet-to-be observed) phenotypes in many species and fields of study. The GenoMatrix software, presented here, is a user-friendly package to facilitate the process of using genome-wide marker data and parentage information for G-BLUP and P-BLUP analyses on complex traits. It provides users with a collection of applications which help with a set of tasks, from performing quality control on data to constructing and manipulating the genomic and pedigree-based relationship matrices and obtaining their inverses. Such matrices will then be used in downstream analyses by other statistical packages. The package also enables users to obtain predicted values for unobserved individuals based on the genetic values of observed related individuals. GenoMatrix is available to the research community as a Windows 64-bit executable and can be downloaded free of charge at: http://compbio.ufl.edu/software/genomatrix/. © The American Genetic Association. 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
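
    The genomic relationship matrix at the center of G-BLUP is simple to construct from a 0/1/2 marker matrix. The sketch below uses VanRaden's first method; GenoMatrix's exact computations and options may differ.

      set.seed(2)
      M <- matrix(rbinom(20 * 100, 2, 0.3), nrow = 20)   # 20 individuals x 100 SNPs
      p <- colMeans(M) / 2                               # allele frequencies
      Z <- sweep(M, 2, 2 * p)                            # center the marker codes
      G <- tcrossprod(Z) / (2 * sum(p * (1 - p)))        # genomic relationship matrix
      Ginv <- solve(G + diag(1e-6, nrow(G)))             # ridged inverse for the MME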

  17. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    PubMed

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.

  18. BEAGLE: An Application Programming Interface and High-Performance Computing Library for Statistical Phylogenetics

    PubMed Central

    Ayres, Daniel L.; Darling, Aaron; Zwickl, Derrick J.; Beerli, Peter; Holder, Mark T.; Lewis, Paul O.; Huelsenbeck, John P.; Ronquist, Fredrik; Swofford, David L.; Cummings, Michael P.; Rambaut, Andrew; Suchard, Marc A.

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software. PMID:21963610

  19. Nondestructive Evaluation (NDE) Technology Initiatives (NTIP). Delivery Order 0039: Statistical Comparison of Competing Material Models

    DTIC Science & Technology

    2003-01-01

    Only fragmentary search-snippet text is available for this report. The recoverable content indicates that the statistical comparison adapts Bayesian model-comparison methods from Kass and Raftery (1995) and Congdon (2001), models material density adjusted for resin content, and directs readers to the WinBUGS software package, which originated as a statistical research project at the Medical Research Council.

  20. Sharing brain mapping statistical results with the neuroimaging data model

    PubMed Central

    Maumet, Camille; Auer, Tibor; Bowring, Alexander; Chen, Gang; Das, Samir; Flandin, Guillaume; Ghosh, Satrajit; Glatard, Tristan; Gorgolewski, Krzysztof J.; Helmer, Karl G.; Jenkinson, Mark; Keator, David B.; Nichols, B. Nolan; Poline, Jean-Baptiste; Reynolds, Richard; Sochat, Vanessa; Turner, Jessica; Nichols, Thomas E.

    2016-01-01

    Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Result graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publicly available at: http://nidm.nidash.org/specs/nidm-results.html. PMID:27922621

  1. Statistical methods and computing for big data.

    PubMed

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on the open-source R environment and its packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.
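
    As a toy illustration of the online-updating idea for stream data (a generic streaming sketch, not the authors' cumulative estimating-equation method), the Python code below updates logistic regression coefficients from mini-batches that are discarded after use; all constants are hypothetical.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      # One online update: process a mini-batch from the stream, then discard it
      def online_update(beta, X_batch, y_batch, lr=0.1):
          grad = X_batch.T @ (sigmoid(X_batch @ beta) - y_batch) / len(y_batch)
          return beta - lr * grad

      rng = np.random.default_rng(0)
      beta_true = np.array([1.0, -2.0])
      beta = np.zeros(2)
      for _ in range(2000):                       # stream of mini-batches
          X = rng.normal(size=(32, 2))
          y = rng.binomial(1, sigmoid(X @ beta_true))
          beta = online_update(beta, X, y)
      print(beta)   # approaches beta_true without ever storing the full stream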

  2. Statistical methods and computing for big data

    PubMed Central

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on the open-source R environment and its packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593

  3. Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 2. Explicit Solvent Particle Mesh Ewald.

    PubMed

    Salomon-Ferrer, Romelia; Götz, Andreas W; Poole, Duncan; Le Grand, Scott; Walker, Ross C

    2013-09-10

    We present an implementation of explicit solvent all atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled GPUs. First released publicly in April 2010 as part of version 11 of the AMBER MD package and further improved and optimized over the last two years, this implementation supports the three most widely used statistical mechanical ensembles (NVE, NVT, and NPT), uses particle mesh Ewald (PME) for the long-range electrostatics, and runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs), providing results that are statistically indistinguishable from the traditional CPU version of the software and with performance that exceeds that achievable by the CPU version of AMBER software running on all conventional CPU-based clusters and supercomputers. We briefly discuss three different precision models developed specifically for this work (SPDP, SPFP, and DPDP) and highlight the technical details of the approach as it extends beyond previously reported work [Götz et al., J. Chem. Theory Comput. 2012, DOI: 10.1021/ct200909j; Le Grand et al., Comp. Phys. Comm. 2013, DOI: 10.1016/j.cpc.2012.09.022]. We highlight the substantial improvements in performance that are seen over traditional CPU-only machines and provide validation of our implementation and precision models. We also provide evidence supporting our decision to deprecate the previously described fully single precision (SPSP) model from the latest release of the AMBER software package.

  4. P-MartCancer: Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb-Robertson, Bobbie-Jo M.; Bramer, Lisa M.; Jensen, Jeffrey L.

    P-MartCancer is a new interactive web-based software environment that enables biomedical and biological scientists to perform in-depth analyses of global proteomics data without requiring direct interaction with the data or with statistical software. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access to multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium (CPTAC) at the peptide, gene and protein levels. P-MartCancer is deployed using Azure technologies (http://pmart.labworks.org/cptac.html), the web service is alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/), and many statistical functions can be utilized directly from an R package available on GitHub (https://github.com/pmartR).

  5. GAPIT: genome association and prediction integrated tool.

    PubMed

    Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu

    2012-09-15

    Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. Availability: http://www.maizegenetics.net/GAPIT. Contact: zhiwu.zhang@cornell.edu. Supplementary data are available at Bioinformatics online.

  6. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
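
    A rough Python/SciPy analogue of backward knot elimination is sketched below: starting from a generous set of interior knots, it repeatedly drops the knot whose removal increases the residual sum of squares the least. The stopping threshold is an arbitrary stand-in for the statistical tests used in the report's FORTRAN programs, and all data are synthetic.

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 10.0, 200)
      y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

      def sse(interior_knots):
          # Sum of squared residuals of the least-squares cubic B-spline fit
          return float(LSQUnivariateSpline(x, y, interior_knots, k=3).get_residual())

      knots = list(np.linspace(1.0, 9.0, 9))   # generous starting set of interior knots
      current = sse(knots)
      while len(knots) > 2:
          # Try deleting each knot; keep the deletion that hurts the fit least
          trials = [(sse(knots[:i] + knots[i + 1:]), i) for i in range(len(knots))]
          best_sse, i = min(trials)
          if best_sse > 1.10 * current:        # ad hoc stopping rule, not the report's test
              break
          knots.pop(i)
          current = best_sse

      print(len(knots), knots)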

  7. Fitting multidimensional splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  8. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

    We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical, but also numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.
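
    For a sense of the quantities involved, this Python sketch computes the occurrence probability of a pattern under a first-order Markov source and, by linearity of expectation, the expected number of (overlapping) occurrences in a text of length n. SPatt itself goes much further and computes full distributions; the uniform transition matrix here is a toy assumption.

      import numpy as np

      bases = "ACGT"
      pi = np.full((4, 4), 0.25)   # transition matrix of the Markov source (toy: uniform)
      mu = np.full(4, 0.25)        # stationary distribution

      def pattern_prob(word):
          # P(word at a fixed position) = mu(w1) * prod pi(w_i, w_{i+1})
          idx = [bases.index(c) for c in word]
          p = mu[idx[0]]
          for a, b in zip(idx, idx[1:]):
              p *= pi[a, b]
          return p

      n = 10_000
      word = "ACGT"
      expected = (n - len(word) + 1) * pattern_prob(word)
      print(expected)   # expected count of overlapping occurrences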

  9. scoringRules - A software package for probabilistic model evaluation

    NASA Astrophysics Data System (ADS)

    Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian

    2016-04-01

    Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow one to compare alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that is available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. Two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, Bayesian forecasts produced via Markov Chain Monte Carlo take this form. Thereby, the scoringRules package provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
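
    A hedged Python illustration of the sample-based case: the standard ensemble estimator of the continuous ranked probability score, CRPS(F, y) ≈ E|X - y| - 0.5 E|X - X'|, computed from simulation draws. This mirrors the kind of computation scoringRules provides for R; the forecast draws below are synthetic.

      import numpy as np

      def crps_ensemble(sample, y):
          # Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|
          sample = np.asarray(sample)
          term1 = np.mean(np.abs(sample - y))
          term2 = 0.5 * np.mean(np.abs(sample[:, None] - sample[None, :]))
          return term1 - term2

      rng = np.random.default_rng(0)
      forecast_draws = rng.normal(loc=1.0, scale=1.0, size=1000)  # e.g. MCMC output
      print(crps_ensemble(forecast_draws, y=0.3))   # lower scores indicate better forecasts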

  10. Nuclear and Particle Physics Simulations: The Consortium of Upper-Level Physics Software

    NASA Astrophysics Data System (ADS)

    Bigelow, Roberta; Moloney, Michael J.; Philpott, John; Rothberg, Joseph

    1995-06-01

    The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Wave and Optics.

  11. The use of applied software for the professional training of students studying humanities

    NASA Astrophysics Data System (ADS)

    Sadchikova, A. S.; Rodin, M. M.

    2017-01-01

    Research practice is an integral part of the training of humanities students. In this regard, the training process should incorporate modern information technologies. This paper examines the most popular applied software products used for data processing in the social sciences. For testing purposes we selected the most commonly preferred professional packages: MS Excel, IBM SPSS Statistics, STATISTICA, and STADIA. The article also reports test results for Prikladnoy Sotsiolog, a specialized software package applicable to the preparation stage of research. The selected software was tested over one term in groups of students studying humanities.

  12. Coupling Sensing Hardware with Data Interrogation Software for Structural Health Monitoring

    DOE PAGES

    Farrar, Charles R.; Allen, David W.; Park, Gyuhae; ...

    2006-01-01

    The process of implementing a damage detection strategy for aerospace, civil and mechanical engineering infrastructure is referred to as structural health monitoring (SHM). The authors' approach is to address the SHM problem in the context of a statistical pattern recognition paradigm. In this paradigm, the process can be broken down into four parts: (1) Operational Evaluation, (2) Data Acquisition and Cleansing, (3) Feature Extraction and Data Compression, and (4) Statistical Model Development for Feature Discrimination. These processes must be implemented through hardware or software and, in general, some combination of these two approaches will be used. This paper will discuss each portion of the SHM process with particular emphasis on the coupling of a general purpose data interrogation software package for structural health monitoring with a modular wireless sensing and processing platform. More specifically, this paper will address the need to take an integrated hardware/software approach to developing SHM solutions.

  13. Statistical Approaches to Adjusting Weights for Dependent Arms in Network Meta-analysis.

    PubMed

    Su, Yu-Xuan; Tu, Yu-Kang

    2018-05-22

    Network meta-analysis compares multiple treatments in terms of their efficacy and harm by including evidence from randomized controlled trials. Most clinical trials use a parallel design, where patients are randomly allocated to different treatments and receive only one treatment. However, some trials use within-person designs such as split-body, split-mouth and cross-over designs, where each patient may receive more than one treatment. Data from treatment arms within these trials are no longer independent, so the correlations between dependent arms need to be accounted for within the statistical analyses. Ignoring these correlations may result in incorrect conclusions. The main objective of this study is to develop statistical approaches to adjusting weights for dependent arms within special design trials. In this study, we demonstrate the following three approaches: the data augmentation approach, the adjusting-variance approach, and the reducing-weight approach. These three approaches can be readily applied in current statistical tools such as R and STATA. An example of periodontal regeneration was used to demonstrate how these approaches could be undertaken and implemented within statistical software packages, and to compare results from different approaches. The adjusting-variance approach can be implemented within the network package in STATA, while the reducing-weight approach requires computer software programming to set up the within-study variance-covariance matrix. This article is protected by copyright. All rights reserved.

  14. Evaluation and selection of open-source EMR software packages based on integrated AHP and TOPSIS.

    PubMed

    Zaidan, A A; Zaidan, B B; Al-Haiqi, Ahmed; Kiah, M L M; Hussain, Muzammil; Abdulnabi, Mohamed

    2015-02-01

    Evaluating and selecting software packages that meet the requirements of an organization are difficult aspects of the software engineering process. Selecting the wrong open-source EMR software package can be costly and may adversely affect business processes and the functioning of the organization. This study aims to evaluate and select open-source EMR software packages based on multi-criteria decision-making. A hands-on study was performed and a set of open-source EMR software packages were implemented locally on separate virtual machines to examine the systems more closely. Several measures were specified as the evaluation basis, and the systems were ranked on a set of metric outcomes using the Integrated Analytic Hierarchy Process (AHP) and TOPSIS. The experimental results showed that GNUmed and OpenEMR achieved better ranking scores than the other open-source EMR software packages. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. KMWin--a convenient tool for graphical presentation of results from Kaplan-Meier survival time analysis.

    PubMed

    Gross, Arnd; Ziepert, Marita; Scholz, Markus

    2012-01-01

    Analysis of clinical studies often necessitates multiple graphical representations of the results. Many professional software packages are available for this purpose. Most packages are either only commercially available or hard to use especially if one aims to generate or customize a huge number of similar graphical outputs. We developed a new, freely available software tool called KMWin (Kaplan-Meier for Windows) facilitating Kaplan-Meier survival time analysis. KMWin is based on the statistical software environment R and provides an easy to use graphical interface. Survival time data can be supplied as SPSS (sav), SAS export (xpt) or text file (dat), which is also a common export format of other applications such as Excel. Figures can directly be exported in any graphical file format supported by R. On the basis of a working example, we demonstrate how to use KMWin and present its main functions. We show how to control the interface, customize the graphical output, and analyse survival time data. A number of comparisons are performed between KMWin and SPSS regarding graphical output, statistical output, data management and development. Although the general functionality of SPSS is larger, KMWin comprises a number of features useful for survival time analysis in clinical trials and other applications. These are for example number of cases and number of cases under risk within the figure or provision of a queue system for repetitive analyses of updated data sets. Moreover, major adjustments of graphical settings can be performed easily on a single window. We conclude that our tool is well suited and convenient for repetitive analyses of survival time data. It can be used by non-statisticians and provides often used functions as well as functions which are not supplied by standard software packages. The software is routinely applied in several clinical study groups.
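
    For readers unfamiliar with the underlying estimator, here is a minimal Python implementation of the Kaplan-Meier product-limit estimate that tools like KMWin wrap with a graphical interface; the six observations are invented for illustration.

      import numpy as np

      def kaplan_meier(time, event):
          # Return the distinct event times and the survival estimates S(t)
          time, event = np.asarray(time), np.asarray(event)
          s, times, surv = 1.0, [], []
          for t in np.unique(time[event == 1]):
              at_risk = np.sum(time >= t)                    # still under observation at t
              deaths = np.sum((time == t) & (event == 1))    # events occurring at t
              s *= 1.0 - deaths / at_risk                    # product-limit update
              times.append(t)
              surv.append(s)
          return np.array(times), np.array(surv)

      t = [5, 6, 6, 2, 4, 4]   # follow-up times
      e = [1, 0, 1, 1, 1, 0]   # 1 = event observed, 0 = censored
      print(kaplan_meier(t, e))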

  16. KMWin – A Convenient Tool for Graphical Presentation of Results from Kaplan-Meier Survival Time Analysis

    PubMed Central

    Gross, Arnd; Ziepert, Marita; Scholz, Markus

    2012-01-01

    Background Analysis of clinical studies often necessitates multiple graphical representations of the results. Many professional software packages are available for this purpose. Most packages are either only commercially available or hard to use especially if one aims to generate or customize a huge number of similar graphical outputs. We developed a new, freely available software tool called KMWin (Kaplan-Meier for Windows) facilitating Kaplan-Meier survival time analysis. KMWin is based on the statistical software environment R and provides an easy to use graphical interface. Survival time data can be supplied as SPSS (sav), SAS export (xpt) or text file (dat), which is also a common export format of other applications such as Excel. Figures can directly be exported in any graphical file format supported by R. Results On the basis of a working example, we demonstrate how to use KMWin and present its main functions. We show how to control the interface, customize the graphical output, and analyse survival time data. A number of comparisons are performed between KMWin and SPSS regarding graphical output, statistical output, data management and development. Although the general functionality of SPSS is larger, KMWin comprises a number of features useful for survival time analysis in clinical trials and other applications. These are for example number of cases and number of cases under risk within the figure or provision of a queue system for repetitive analyses of updated data sets. Moreover, major adjustments of graphical settings can be performed easily on a single window. Conclusions We conclude that our tool is well suited and convenient for repetitive analyses of survival time data. It can be used by non-statisticians and provides often used functions as well as functions which are not supplied by standard software packages. The software is routinely applied in several clinical study groups. PMID:22723912

  17. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  18. The Equivalence of Three Statistical Packages for Performing Hierarchical Cluster Analysis

    ERIC Educational Resources Information Center

    Blashfield, Roger

    1977-01-01

    Three different software programs which contain hierarchical agglomerative cluster analysis procedures were shown to generate different solutions on the same data set using apparently the same options. The basis for the differences in the solutions was the formulae used to calculate Euclidean distance. (Author/JKS)
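
    The effect is easy to reproduce. In the hedged Python sketch below, average-linkage clustering of the same four items yields different trees depending on whether plain or squared Euclidean distances are supplied; the condensed distance values are contrived specifically to make the merge order flip, mimicking the kind of formula difference identified in the article.

      import numpy as np
      from scipy.cluster.hierarchy import linkage

      # Condensed distance matrix for 4 items: d01, d02, d03, d12, d13, d23
      d = np.array([1.0, 1.2, 10.0, 6.8, 10.0, 4.1])

      print(linkage(d, method="average"))       # merges {0,1}, then {0,1}+2
      print(linkage(d ** 2, method="average"))  # merges {0,1}, then {2,3}: a different tree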

  19. Cellular Consequences of Telomere Shortening in Histologically Normal Breast Tissues

    DTIC Science & Technology

    2013-09-01

    using the open source, Java-based image analysis software package ImageJ (http://rsb.info.nih.gov/ij/) and a custom-designed plugin ("Telometer")... Tabulated data were stored in a MySQL (http://www.mysql.com) database and viewed through Microsoft Access (Microsoft Corp.). Statistical Analysis: For

  20. Computer Simulation of Classic Studies in Psychology.

    ERIC Educational Resources Information Center

    Bradley, Drake R.

    This paper describes DATASIM, a comprehensive software package which generates simulated data for actual or hypothetical research designs. DATASIM is primarily intended for use in statistics and research methods courses, where it is used to generate "individualized" datasets for students to analyze, and later to correct their answers.…

  1. GWAMA: software for genome-wide association meta-analysis.

    PubMed

    Mägi, Reedik; Morris, Andrew P

    2010-05-28

    Despite the recent success of genome-wide association studies in identifying novel loci contributing effects to complex human traits, such as type 2 diabetes and obesity, much of the genetic component of variation in these phenotypes remains unexplained. One way of improving power to detect further novel loci is through meta-analysis of studies from the same population, increasing the sample size over any individual study. Although statistical software analysis packages incorporate routines for meta-analysis, they are ill-equipped to meet the challenges of the scale and complexity of data generated in genome-wide association studies. We have developed flexible, open-source software for the meta-analysis of genome-wide association studies. The software incorporates a variety of error-trapping facilities, and provides a range of meta-analysis summary statistics. The software is distributed with scripts that allow simple formatting of files containing the results of each association study and generate graphical summaries of genome-wide meta-analysis results. The GWAMA (Genome-Wide Association Meta-Analysis) software has been developed to perform meta-analysis of summary statistics generated from genome-wide association studies of dichotomous phenotypes or quantitative traits. Software with source files, documentation and example data files are freely available online at http://www.well.ox.ac.uk/GWAMA.
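
    The core summary statistic is standard inverse-variance weighting. As an illustrative sketch (not GWAMA's code), the following Python function combines per-study effect estimates for one SNP under a fixed-effect model; the numbers are made up.

      import numpy as np

      def fixed_effect_meta(betas, ses):
          # Inverse-variance-weighted fixed-effect meta-analysis of per-study effects
          w = 1.0 / np.asarray(ses) ** 2
          beta = np.sum(w * np.asarray(betas)) / np.sum(w)   # pooled effect
          se = np.sqrt(1.0 / np.sum(w))                      # pooled standard error
          return beta, se, beta / se                         # effect, SE, z-statistic

      # Per-study log-odds ratios and standard errors for one SNP (hypothetical)
      print(fixed_effect_meta([0.12, 0.08, 0.15], [0.05, 0.04, 0.06]))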

  2. Mousetrap: An integrated, open-source mouse-tracking package.

    PubMed

    Kieslich, Pascal J; Henninger, Felix

    2017-10-01

    Mouse-tracking - the analysis of mouse movements in computerized experiments - is becoming increasingly popular in the cognitive sciences. Mouse movements are taken as an indicator of commitment to or conflict between choice options during the decision process. Using mouse-tracking, researchers have gained insight into the temporal development of cognitive processes across a growing number of psychological domains. In the current article, we present software that offers easy and convenient means of recording and analyzing mouse movements in computerized laboratory experiments. In particular, we introduce and demonstrate the mousetrap plugin that adds mouse-tracking to OpenSesame, a popular general-purpose graphical experiment builder. By integrating with this existing experimental software, mousetrap allows for the creation of mouse-tracking studies through a graphical interface, without requiring programming skills. Thus, researchers can benefit from the core features of a validated software package and the many extensions available for it (e.g., the integration with auxiliary hardware such as eye-tracking, or the support of interactive experiments). In addition, the recorded data can be imported directly into the statistical programming language R using the mousetrap package, which greatly facilitates analysis. Mousetrap is cross-platform, open-source and available free of charge from https://github.com/pascalkieslich/mousetrap-os .
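
    As an example of the kind of trajectory measure such packages compute, this Python sketch derives the maximum absolute deviation of a cursor path from the straight start-to-end line, a common mouse-tracking curvature index; it is a generic reimplementation sketch, not mousetrap's code, and the path coordinates are invented.

      import numpy as np

      def max_abs_deviation(xy):
          # Maximum perpendicular deviation of a cursor path from the
          # straight line connecting its start and end points
          xy = np.asarray(xy, dtype=float)
          start, end = xy[0], xy[-1]
          direction = end - start
          direction /= np.linalg.norm(direction)   # unit vector along the direct path
          rel = xy - start
          along = rel @ direction                  # projection onto the direct path
          perp = rel - np.outer(along, direction)  # perpendicular component
          return np.max(np.linalg.norm(perp, axis=1))

      path = [(0, 0), (0.1, 0.4), (0.3, 0.9), (1.0, 1.0)]
      print(max_abs_deviation(path))   # larger values suggest more response conflict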

  3. Packaging Software Assets for Reuse

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.; Marshall, J. J.; Downs, R. R.

    2010-12-01

    The reuse of existing software assets such as code, architecture, libraries, and modules in current software and systems development projects can provide many benefits, including reduced costs, in time and effort, and increased reliability. Many reusable assets are currently available in various online catalogs and repositories, usually broken down by disciplines such as programming language (Ibiblio for Maven/Java developers, PyPI for Python developers, CPAN for Perl developers, etc.). The way these assets are packaged for distribution can play a role in their reuse - an asset that is packaged simply and logically is typically easier to understand, install, and use, thereby increasing its reusability. A well-packaged asset has advantages in being more reusable and thus more likely to provide benefits through its reuse. This presentation will discuss various aspects of software asset packaging and how they can affect the reusability of the assets. The characteristics of well-packaged software will be described. A software packaging domain model will be introduced, and some existing packaging approaches examined. An example case study of a Reuse Enablement System (RES), currently being created by near-term Earth science decadal survey missions, will provide information about the use of the domain model. Awareness of these factors will help software developers package their reusable assets so that they can provide the most benefits for software reuse.

  4. SNP_tools: A compact tool package for analysis and conversion of genotype data for MS-Excel

    PubMed Central

    Chen, Bowang; Wilkening, Stefan; Drechsel, Marion; Hemminki, Kari

    2009-01-01

    Background Single nucleotide polymorphism (SNP) genotyping is a major activity in biomedical research. Scientists prefer to have facile access to the results, which may require conversions between data formats. First-hand SNP data are often entered in or saved in the MS-Excel format, but this software lacks genetics- and epidemiology-related functions. A general tool to do basic genetic and epidemiological analysis and data conversion for MS-Excel is needed. Findings The SNP_tools package is prepared as an add-in for MS-Excel. The code is written in Visual Basic for Applications, embedded in the Microsoft Office package. This add-in is an easy-to-use tool for users with basic computer knowledge (and requirements for basic statistical analysis). Conclusion Our implementation for Microsoft Excel 2000-2007 in Microsoft Windows 2000, XP, Vista and Windows 7 beta can handle files in different formats and converts them into other formats. It is free software. PMID:19852806

  5. SNP_tools: A compact tool package for analysis and conversion of genotype data for MS-Excel.

    PubMed

    Chen, Bowang; Wilkening, Stefan; Drechsel, Marion; Hemminki, Kari

    2009-10-23

    Single nucleotide polymorphism (SNP) genotyping is a major activity in biomedical research. Scientists prefer to have facile access to the results, which may require conversions between data formats. First-hand SNP data are often entered in or saved in the MS-Excel format, but this software lacks genetics- and epidemiology-related functions. A general tool to do basic genetic and epidemiological analysis and data conversion for MS-Excel is needed. The SNP_tools package is prepared as an add-in for MS-Excel. The code is written in Visual Basic for Applications, embedded in the Microsoft Office package. This add-in is an easy-to-use tool for users with basic computer knowledge (and requirements for basic statistical analysis). Our implementation for Microsoft Excel 2000-2007 in Microsoft Windows 2000, XP, Vista and Windows 7 beta can handle files in different formats and converts them into other formats. It is free software.

  6. Around and about an application of the GAMLSS package to non-stationary flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Debele, S. E.; Bogdanowicz, E.; Strupczewski, W. G.

    2017-08-01

    The non-stationarity of hydrologic processes due to climate change or human activities is challenging for researchers and practitioners. However, the practical requirements for taking non-stationarity into account in decision-making procedures exceed the current development of both the theory and the software. Currently, the most popular and freely available software package that allows for non-stationary statistical analysis is the GAMLSS (generalized additive models for location, scale and shape) package. GAMLSS has been used in a variety of fields. There are also several papers recommending GAMLSS for hydrological problems; however, there are still important issues which have not previously been discussed, concerning mainly the applicability of GAMLSS not only for research and academic purposes but also in design practice. In this paper, we present a summary of our experiences in the implementation of GAMLSS for non-stationary flood frequency analysis, highlighting its advantages and pointing out weaknesses with regard to methodological and practical topics.
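
    The central idea, stripped of GAMLSS's generality, is to let a distribution parameter depend on a covariate such as time. The hedged Python sketch below fits a Gumbel distribution to synthetic annual maxima with a linear trend in the location parameter by direct maximum likelihood; all constants are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      years = np.arange(50, dtype=float)
      peaks = rng.gumbel(loc=100 + 0.8 * years, scale=20)   # synthetic maxima with a trend

      def nll(params):
          b0, b1, log_scale = params
          mu = b0 + b1 * years           # time-varying location: the non-stationary part
          sigma = np.exp(log_scale)      # log link keeps the scale positive
          z = (peaks - mu) / sigma
          return np.sum(np.log(sigma) + z + np.exp(-z))   # Gumbel negative log-likelihood

      fit = minimize(nll, x0=[np.mean(peaks), 0.0, np.log(np.std(peaks))],
                     method="Nelder-Mead")
      print(fit.x)   # intercept, trend per year, log scale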

  7. Modern Physics Simulations

    NASA Astrophysics Data System (ADS)

    Brandt, Douglas; Hiller, John R.; Moloney, Michael J.

    1995-10-01

    The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Wave and Optics.

  8. An Algorithm and R Program for Fitting and Simulation of Pharmacokinetic and Pharmacodynamic Data.

    PubMed

    Li, Jijie; Yan, Kewei; Hou, Lisha; Du, Xudong; Zhu, Ping; Zheng, Li; Zhu, Cairong

    2017-06-01

    Pharmacokinetic/pharmacodynamic link models are widely used in dose-finding studies. By applying such models, the results of initial pharmacokinetic/pharmacodynamic studies can be used to predict the potential therapeutic dose range. This knowledge can improve the design of later comparative large-scale clinical trials by reducing the number of participants and saving time and resources. However, the modeling process can be challenging, time consuming, and costly, even when using cutting-edge, powerful pharmacological software. Here, we provide a freely available R program for expediently analyzing pharmacokinetic/pharmacodynamic data, including data importation, parameter estimation, simulation, and model diagnostics. First, we explain the theory related to the establishment of the pharmacokinetic/pharmacodynamic link model. Subsequently, we present the algorithms used for parameter estimation and potential therapeutic dose computation. The implementation of the R program is illustrated by a clinical example. The software package is then validated by comparing the model parameters and the goodness-of-fit statistics generated by our R package with those generated by the widely used pharmacological software WinNonlin. The pharmacokinetic and pharmacodynamic parameters as well as the potential recommended therapeutic dose can be acquired with the R package. The validation process shows that the parameters estimated using our package are satisfactory. The R program developed and presented here provides pharmacokinetic researchers with a simple and easy-to-access tool for pharmacokinetic/pharmacodynamic analysis on personal computers.
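
    To fix ideas, this Python sketch evaluates a one-compartment oral-absorption pharmacokinetic model linked to an Emax pharmacodynamic effect, the type of model such packages fit; every parameter value here is hypothetical rather than taken from the article.

      import numpy as np

      # One-compartment oral PK (Bateman equation): concentration after a single dose
      def concentration(t, dose=100.0, V=50.0, ka=1.0, ke=0.1):
          return (dose / V) * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

      # Emax pharmacodynamic link: effect driven by the concentration
      def effect(conc, e0=10.0, emax=80.0, ec50=0.8):
          return e0 + emax * conc / (ec50 + conc)

      t = np.linspace(0, 24, 9)          # hours after dosing
      c = concentration(t)
      print(np.column_stack([t, c, effect(c)]))   # time, concentration, effect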

  9. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.

  10. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.

  11. Comparison of Cortical and Subcortical Measurements in Normal Older Adults across Databases and Software Packages

    PubMed Central

    Rane, Swati; Plassard, Andrew; Landman, Bennett A.; Claassen, Daniel O.; Donahue, Manus J.

    2017-01-01

    This work explores the feasibility of combining anatomical MRI data across two public repositories namely, the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Progressive Parkinson’s Markers Initiative (PPMI). We compared cortical thickness and subcortical volumes in cognitively normal older adults between datasets with distinct imaging parameters to assess if they would provide equivalent information. Three distinct datasets were identified. Major differences in data were scanner manufacturer and the use of magnetization inversion to enhance tissue contrast. Equivalent datasets, i.e., those providing similar volumetric measurements in cognitively normal controls, were identified in ADNI and PPMI. These were datasets obtained on the Siemens scanner with TI = 900 ms. Our secondary goal was to assess the agreement between subcortical volumes that are obtained with different software packages. Three subcortical measurement applications (FSL, FreeSurfer, and a recent multi-atlas approach) were compared. Our results show significant agreement in the measurements of caudate, putamen, pallidum, and hippocampus across the packages and poor agreement between measurements of accumbens and amygdala. This is likely due to their smaller size and lack of gray matter-white matter tissue contrast for accurate segmentation. This work provides a segue to combine imaging data from ADNI and PPMI to increase statistical power as well as to interrogate common mechanisms in disparate pathologies such as Alzheimer’s and Parkinson’s diseases. It lays the foundation for comparison of anatomical data acquired with disparate imaging parameters and analyzed with disparate software tools. Furthermore, our work partly explains the variability in the results of studies using different software packages. PMID:29756095

  12. Comparison of Cortical and Subcortical Measurements in Normal Older Adults across Databases and Software Packages.

    PubMed

    Rane, Swati; Plassard, Andrew; Landman, Bennett A; Claassen, Daniel O; Donahue, Manus J

    2017-01-01

    This work explores the feasibility of combining anatomical MRI data across two public repositories namely, the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Progressive Parkinson's Markers Initiative (PPMI). We compared cortical thickness and subcortical volumes in cognitively normal older adults between datasets with distinct imaging parameters to assess if they would provide equivalent information. Three distinct datasets were identified. Major differences in data were scanner manufacturer and the use of magnetization inversion to enhance tissue contrast. Equivalent datasets, i.e., those providing similar volumetric measurements in cognitively normal controls, were identified in ADNI and PPMI. These were datasets obtained on the Siemens scanner with TI = 900 ms. Our secondary goal was to assess the agreement between subcortical volumes that are obtained with different software packages. Three subcortical measurement applications (FSL, FreeSurfer, and a recent multi-atlas approach) were compared. Our results show significant agreement in the measurements of caudate, putamen, pallidum, and hippocampus across the packages and poor agreement between measurements of accumbens and amygdala. This is likely due to their smaller size and lack of gray matter-white matter tissue contrast for accurate segmentation. This work provides a segue to combine imaging data from ADNI and PPMI to increase statistical power as well as to interrogate common mechanisms in disparate pathologies such as Alzheimer's and Parkinson's diseases. It lays the foundation for comparison of anatomical data acquired with disparate imaging parameters and analyzed with disparate software tools. Furthermore, our work partly explains the variability in the results of studies using different software packages.

  13. Computer Science and Technology: Introduction to Software Packages

    DTIC Science & Technology

    1984-04-01

    Table 5: Sources of Software Packages. Table 6: Reference Services Matrix. Table 7: Reference Matrix. LIST OF FIGURES: Figure 1: Document... consideration should be given to the acquisition of appropriate software packages to replace or upgrade existing services and to provide services not... Consequently, there are many companies that produce only software packages, and are committed to providing training, service, and support. These vendors

  14. GneimoSim: A Modular Internal Coordinates Molecular Dynamics Simulation Package

    PubMed Central

    Larsen, Adrien B.; Wagner, Jeffrey R.; Kandel, Saugat; Salomon-Ferrer, Romelia; Vaidehi, Nagarajan; Jain, Abhinandan

    2014-01-01

    The Generalized Newton-Euler Inverse Mass Operator (GNEIMO) method is an advanced method for internal coordinates molecular dynamics (ICMD). GNEIMO includes several theoretical and algorithmic advancements that address longstanding challenges with ICMD simulations. In this paper we describe the GneimoSim ICMD software package that implements the GNEIMO method. We believe that GneimoSim is the first software package to include advanced features such as the equipartition principle derived for internal coordinates, and a method for including the Fixman potential to eliminate systematic statistical biases introduced by the use of hard constraints. Moreover, by design, GneimoSim is extensible and can be easily interfaced with third party force field packages for ICMD simulations. Currently, GneimoSim includes interfaces to the LAMMPS, OpenMM, and Rosetta force field calculation packages. The availability of a comprehensive Python interface to the underlying C++ classes and their methods provides a powerful and versatile mechanism for users to develop simulation scripts to configure the simulation and control the simulation flow. GneimoSim has been used extensively for studying the dynamics of protein structures, refinement of protein homology models, and for simulating large scale protein conformational changes with enhanced sampling methods. GneimoSim is not limited to proteins and can also be used for the simulation of polymeric materials. PMID:25263538

  15. Application of the AMBUR R package for spatio-temporal analysis of shoreline change: Jekyll Island, Georgia, USA

    NASA Astrophysics Data System (ADS)

    Jackson, Chester W.; Alexander, Clark R.; Bush, David M.

    2012-04-01

    The AMBUR (Analyzing Moving Boundaries Using R) package for the R software environment provides a collection of functions for assisting with analyzing and visualizing historical shoreline change. The package allows import and export of geospatial data in ESRI shapefile format, which is compatible with most commercial and open-source GIS software. The "baseline and transect" method is the primary technique used to quantify distances and rates of shoreline movement, and to detect classification changes across time. Along with the traditional "perpendicular" transect method, two new transect methods, "near" and "filtered," assist with quantifying changes along curved shorelines that are problematic for perpendicular transect methods. Output from the analyses includes data tables, graphics, and geospatial data, which are useful in rapidly assessing trends and potential errors in the dataset. A forecasting function also allows the user to estimate the future location of the shoreline and store the results in a shapefile. Other utilities and tools provided in the package assist with preparing and manipulating geospatial data, error checking, and generating supporting graphics and shapefiles. The package can be customized to perform additional statistical, graphical, and geospatial functions, and it is capable of analyzing the movement of any boundary (e.g., shorelines, glacier terminus, fire edge, and marine and terrestrial ecozones).

  16. GneimoSim: a modular internal coordinates molecular dynamics simulation package.

    PubMed

    Larsen, Adrien B; Wagner, Jeffrey R; Kandel, Saugat; Salomon-Ferrer, Romelia; Vaidehi, Nagarajan; Jain, Abhinandan

    2014-12-05

    The generalized Newton-Euler inverse mass operator (GNEIMO) method is an advanced method for internal coordinates molecular dynamics (ICMD). GNEIMO includes several theoretical and algorithmic advancements that address longstanding challenges with ICMD simulations. In this article, we describe the GneimoSim ICMD software package that implements the GNEIMO method. We believe that GneimoSim is the first software package to include advanced features such as the equipartition principle derived for internal coordinates, and a method for including the Fixman potential to eliminate systematic statistical biases introduced by the use of hard constraints. Moreover, by design, GneimoSim is extensible and can be easily interfaced with third party force field packages for ICMD simulations. Currently, GneimoSim includes interfaces to LAMMPS, OpenMM, and Rosetta force field calculation packages. The availability of a comprehensive Python interface to the underlying C++ classes and their methods provides a powerful and versatile mechanism for users to develop simulation scripts to configure the simulation and control the simulation flow. GneimoSim has been used extensively for studying the dynamics of protein structures, refinement of protein homology models, and for simulating large scale protein conformational changes with enhanced sampling methods. GneimoSim is not limited to proteins and can also be used for the simulation of polymeric materials. © 2014 Wiley Periodicals, Inc.

  17. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.

    PubMed

    Jenness, Samuel M; Goodreau, Steven M; Morris, Martina

    2018-04-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers.
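
    As a much-simplified illustration of epidemic simulation on a network (EpiModel's actual framework couples the epidemic to temporal exponential random graph models), the following Python sketch runs a discrete-time stochastic SIR process on a static random graph; all rates and sizes are toy assumptions.

      import random
      from collections import Counter

      import networkx as nx

      random.seed(1)
      G = nx.erdos_renyi_graph(n=200, p=0.03, seed=1)

      status = {v: "S" for v in G}                 # everyone starts susceptible
      for v in random.sample(list(G), 5):
          status[v] = "I"                          # seed infections

      beta, gamma = 0.1, 0.05                      # per-contact infection / recovery probs
      for _ in range(100):                         # discrete time steps
          new_status = dict(status)
          for v in G:
              if status[v] != "I":
                  continue
              for u in G.neighbors(v):             # transmission along network edges
                  if status[u] == "S" and random.random() < beta:
                      new_status[u] = "I"
              if random.random() < gamma:          # recovery
                  new_status[v] = "R"
          status = new_status

      print(Counter(status.values()))              # final S/I/R counts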

  18. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks

    PubMed Central

    Jenness, Samuel M.; Goodreau, Steven M.; Morris, Martina

    2018-01-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers. PMID:29731699

  19. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    PubMed

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently require high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4-15.9 times faster, while Unphased jobs performed 1.1-18.6 times faster compared to the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
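
    The combinatorial burden is easy to see. This Python sketch enumerates consecutive and combinational haplotype windows over 26 loci, each of which corresponds to one FBAT or Unphased job; the window widths are illustrative assumptions, not the widths used in the study.

      from itertools import combinations

      loci = [f"SNP{i}" for i in range(1, 27)]      # 26 markers, as in the study

      # Consecutive windows of widths 1..4 (widths are an illustrative choice)
      consecutive = [tuple(loci[i:i + w])
                     for w in range(1, 5)
                     for i in range(len(loci) - w + 1)]

      # Combinational windows: every subset of up to 3 loci
      combinational = [c for w in range(1, 4) for c in combinations(loci, w)]

      # Each window is one FBAT/Unphased run, hence the need for a cluster scheduler
      print(len(consecutive), len(combinational))   # 98 and 2951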

  20. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs

    PubMed Central

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-01-01

    Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently require high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4–15.9 times faster, while Unphased jobs performed 1.1–18.6 times faster compared to the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045

  1. A Survey of Popular R Packages for Cluster Analysis

    ERIC Educational Resources Information Center

    Flynt, Abby; Dean, Nema

    2016-01-01

    Cluster analysis is a set of statistical methods for discovering new group/class structure when exploring data sets. This article reviews the following popular libraries/commands in the R software language for applying different types of cluster analysis: from the stats library, the kmeans and hclust functions; the mclust library; the poLCA…

  2. A Meta-Analysis of Referential Communication Studies: A Computer Readable Literature Review.

    ERIC Educational Resources Information Center

    Dickson, W. Patrick; Moskoff, Mary

    A computer-assisted analysis of studies on referential communication (giving directions/explanations) located 66 reports involving 80 experiments, 114 referential tasks, and over 6,200 individuals. The studies were entered into a statistical software package (SPSS) and analyzed for characteristics of the subjects and experimental designs,…

  3. New Tools for "New" History: Computers and the Teaching of Quantitative Historical Methods.

    ERIC Educational Resources Information Center

    Burton, Orville Vernon; Finnegan, Terence

    1989-01-01

    Explains the development of an instructional software package and accompanying workbook which teaches students to apply computerized statistical analysis to historical data, improving the study of social history. Concludes that the use of microcomputers and supercomputers to manipulate historical data enhances critical thinking skills and the use…

  4. Minimizing bias in biomass allometry: Model selection and log transformation of data

    Treesearch

    Joseph Mascaro; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer

    2011-01-01

    Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the traditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models....
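
    The error-structure issue raised here is easy to demonstrate. The Python sketch below (my own toy data, not the authors' analysis) fits y = a*x^b both by nonlinear least squares, which assumes additive errors, and by linear regression on log-transformed data, whose multiplicative error assumption matches allometric theory; a Baskerville-style correction handles back-transformation bias:

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        x = np.linspace(5, 50, 80)                         # e.g., stem diameter
        y = 0.1 * x**2.4 * rng.lognormal(0, 0.3, x.size)   # multiplicative error

        # Nonlinear fit: assumes additive, homoscedastic errors by default.
        (a_nl, b_nl), _ = curve_fit(lambda x, a, b: a * x**b, x, y, p0=(0.1, 2.0))

        # Log-log linear fit: assumes multiplicative (lognormal) errors.
        b_log, ln_a = np.polyfit(np.log(x), np.log(y), 1)
        resid = np.log(y) - (ln_a + b_log * np.log(x))
        cf = np.exp(resid.var(ddof=2) / 2)   # correction for back-transform bias

        print(f"nonlinear:  a={a_nl:.3f}  b={b_nl:.3f}")
        print(f"log-linear: a={np.exp(ln_a) * cf:.3f}  b={b_log:.3f}")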

  5. From "Work-and-Walk-By" to "Sherpa-at-Work"

    ERIC Educational Resources Information Center

    Drijvers, Paul

    2011-01-01

    Nowadays, many technological means are available to support teaching, such as the interactive whiteboard, class sets of laptop or netbook computers, and high speed internet access. For mathematics education there are advanced software packages for geometry, algebra, calculus, and statistics, which in many cases are available on line at no cost.…

  6. Modular Open-Source Software for Item Factor Analysis

    ERIC Educational Resources Information Center

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.

    2015-01-01

    This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…

  7. Bayesian inference for psychology. Part II: Example applications with JASP.

    PubMed

    Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D

    2018-02-01

    Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
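
    As a self-contained illustration of the kind of quantity JASP-style tools report (a toy binomial Bayes factor of my own, not JASP's internal code), the sketch below compares a point null theta = 0.5 against a uniform alternative using the exact Beta marginal likelihood:

        from math import comb
        from scipy.special import beta

        def binomial_bayes_factor(k, n, theta0=0.5):
            """BF01: evidence for H0 (theta = theta0) over H1 (theta ~ Uniform(0, 1))."""
            m0 = comb(n, k) * theta0**k * (1 - theta0) ** (n - k)   # P(data | H0)
            m1 = comb(n, k) * beta(k + 1, n - k + 1)                # P(data | H1)
            return m0 / m1

        # 62 successes in 100 trials: the evidence can be monitored as data accumulate.
        print(binomial_bayes_factor(62, 100))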

  8. Design Optimization Toolkit: Users' Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  9. Selection of software for mechanical engineering undergraduates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheah, C. T.; Yin, C. S.; Halim, T.

    A major problem with the undergraduate mechanical engineering course is students' limited exposure to software packages, coupled with the long learning curve of the existing packages. This work proposes the use of appropriate software packages across the entire mechanical engineering curriculum to ensure students get sufficient exposure to real-life design problems. A variety of software packages are highlighted as suitable for undergraduate work in mechanical engineering, e.g., solvers for simultaneous non-linear equations; uncertainty analysis; 3-D modeling software with FEA; and analysis tools for the solution of problems in thermodynamics, fluid mechanics, mechanical system design, and solid mechanics.

  10. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.

    PubMed

    Desai, Trunil S; Srivastava, Shireesh

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based, truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models, a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
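
    The Monte-Carlo uncertainty idea mentioned above can be shown generically in Python (a toy linear flux model of my own, not FluxPyt code): perturb the measurements at their noise level, refit, and report the spread of the refitted parameters as their standard deviation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy model: measurements are a known linear map of two unknown fluxes.
        A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
        true_flux = np.array([2.0, 3.0])
        sigma = 0.05
        data = A @ true_flux + rng.normal(0, sigma, 3)

        def fit(y):
            return np.linalg.lstsq(A, y, rcond=None)[0]

        # Monte-Carlo: perturb the data at the measurement noise level and refit.
        samples = np.array([fit(data + rng.normal(0, sigma, 3)) for _ in range(2000)])
        print("fluxes:", fit(data), "+/-", samples.std(axis=0))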

  11. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses

    PubMed Central

    Desai, Trunil S.

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based, truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models, a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package. PMID:29736347

  12. csaw: a Bioconductor package for differential binding analysis of ChIP-seq data using sliding windows

    PubMed Central

    Lun, Aaron T.L.; Smyth, Gordon K.

    2016-01-01

    Chromatin immunoprecipitation with massively parallel sequencing (ChIP-seq) is widely used to identify binding sites for a target protein in the genome. An important scientific application is to identify changes in protein binding between different treatment conditions, i.e. to detect differential binding. This can reveal potential mechanisms through which changes in binding may contribute to the treatment effect. The csaw package provides a framework for the de novo detection of differentially bound genomic regions. It uses a window-based strategy to summarize read counts across the genome. It exploits existing statistical software to test for significant differences in each window. Finally, it clusters windows into regions for output and controls the false discovery rate properly over all detected regions. The csaw package can handle arbitrarily complex experimental designs involving biological replicates. It can be applied to both transcription factor and histone mark datasets, and, more generally, to any type of sequencing data measuring genomic coverage. csaw performs favorably against existing methods for de novo differential binding analyses on both simulated and real data. csaw is implemented as an R software package and is freely available from the open-source Bioconductor project. PMID:26578583
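
    The window-based counting strategy is simple to sketch in Python (a generic illustration, not the csaw implementation): read start positions are summarized into counts over sliding windows, which can then each be tested for significant differences.

        import numpy as np

        rng = np.random.default_rng(2)
        reads = np.sort(rng.integers(0, 100_000, 5_000))   # simulated read positions

        WINDOW, STEP = 500, 100    # assumed window width and spacing (bp)
        starts = np.arange(0, 100_000 - WINDOW + 1, STEP)

        # Count reads falling in [start, start + WINDOW) for every window at once.
        counts = np.searchsorted(reads, starts + WINDOW) - np.searchsorted(reads, starts)
        print(counts[:10])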

  13. RGG: A general GUI Framework for R scripts

    PubMed Central

    Visne, Ilhami; Dilaveroglu, Erkan; Vierlinger, Klemens; Lauss, Martin; Yildiz, Ahmet; Weinhaeusel, Andreas; Noehammer, Christa; Leisch, Friedrich; Kriegner, Albert

    2009-01-01

    Background R is the leading open source statistics software with a vast number of biostatistical and bioinformatical analysis packages. To exploit the advantages of R, extensive scripting/programming skills are required. Results We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated at runtime from defined GUI tags that are embedded into the R script. User-GUI input is returned to the R code and replaces the XML tags. RGG files can be developed using any text editor. The current version of RGG is available as a stand-alone software (RGGRunner) and as a plug-in for JGR. Conclusion RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract GUI development from individual GUI toolkits by using an XML-based GUI definition language. Thus RGG can be easily integrated into any software. The RGG project further includes the development of a web-based repository for RGG-GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely. PMID:19254356

  14. Reliability and accuracy of three imaging software packages used for 3D analysis of the upper airway on cone beam computed tomography images.

    PubMed

    Chen, Hui; van Eijnatten, Maureen; Wolff, Jan; de Lange, Jan; van der Stelt, Paul F; Lobbezoo, Frank; Aarab, Ghizlane

    2017-08-01

    The aim of this study was to assess the reliability and accuracy of three different imaging software packages for three-dimensional analysis of the upper airway using CBCT images. To assess the reliability of the software packages, 15 NewTom 5G® (QR Systems, Verona, Italy) CBCT data sets were randomly and retrospectively selected. Two observers measured the volume, minimum cross-sectional area and the length of the upper airway using the Amira® (Visage Imaging Inc., Carlsbad, CA), 3Diagnosys® (3diemme, Cantu, Italy) and OnDemand3D® (CyberMed, Seoul, Republic of Korea) software packages. The intra- and inter-observer reliability of the upper airway measurements was determined using intraclass correlation coefficients and Bland & Altman agreement tests. To assess the accuracy of the software packages, one NewTom 5G® CBCT data set was used to print a three-dimensional anthropomorphic phantom with known dimensions to be used as the "gold standard". This phantom was subsequently scanned using a NewTom 5G® scanner. Based on the CBCT data set of the phantom, one observer measured the volume, minimum cross-sectional area, and length of the upper airway using Amira®, 3Diagnosys®, and OnDemand3D®, and compared these measurements with the gold standard. The intra- and inter-observer reliability of the measurements of the upper airway using the different software packages was excellent (intraclass correlation coefficient ≥0.75). There was excellent agreement between all three software packages in volume, minimum cross-sectional area and length measurements. All software packages underestimated the upper airway volume by -8.8% to -12.3%, the minimum cross-sectional area by -6.2% to -14.6%, and the length by -1.6% to -2.9%. All three software packages offered reliable volume, minimum cross-sectional area and length measurements of the upper airway. The length measurements of the upper airway were the most accurate in all software packages. All software packages underestimated the upper airway dimensions of the anthropomorphic phantom.
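
    The Bland & Altman agreement analysis used here takes only a few lines of Python (simulated observer measurements, not the study data); the 95% limits of agreement are the mean inter-observer difference plus or minus 1.96 standard deviations:

        import numpy as np

        rng = np.random.default_rng(3)
        obs1 = rng.normal(20.0, 4.0, 15)          # e.g., airway volume, observer 1 (cm^3)
        obs2 = obs1 + rng.normal(0.1, 0.5, 15)    # observer 2: small bias plus noise

        diff = obs2 - obs1
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)             # half-width of the limits of agreement
        print(f"bias = {bias:.2f}, 95% limits = [{bias - loa:.2f}, {bias + loa:.2f}]")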

  15. pyLIMA : an open source microlensing software

    NASA Astrophysics Data System (ADS)

    Bachelet, Etienne

    2017-01-01

    Planetary microlensing is a unique tool to detect cold planets around low-mass stars which is approaching a watershed in discoveries as near-future missions incorporate dedicated surveys. NASA and ESA have decided to complement WFIRST-AFTA and Euclid with microlensing programs to enrich our statistics about this planetary population. Of the many challenges inherent in these missions, the data analysis is of primary importance, yet is often perceived as a time-consuming, complex, and daunting barrier to participation in the field. We present the first open source modeling software to conduct a microlensing analysis. This software is written in Python and uses existing packages as much as possible.
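
    The core computation in any point-source, point-lens microlensing fit is the Paczynski magnification curve; the sketch below shows that standard formula in Python (a generic illustration, not pyLIMA's API):

        import numpy as np

        def paczynski_magnification(t, t0, u0, tE):
            """Point-source point-lens magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
            u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)   # lens-source separation (Einstein radii)
            return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

        t = np.linspace(-30, 30, 601)   # days
        print(paczynski_magnification(t, t0=0.0, u0=0.1, tE=20.0).max())   # ~10 at peak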

  16. A new computer code for discrete fracture network modelling

    NASA Astrophysics Data System (ADS)

    Xu, Chaoshui; Dowd, Peter

    2010-03-01

    The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
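
    A minimal Python sketch of the marked point process idea (generic, not the authors' code): fracture centres follow a homogeneous Poisson process, and each centre carries marks for orientation and trace length drawn from assumed distributions.

        import numpy as np

        rng = np.random.default_rng(4)
        AREA_SIDE = 100.0    # model a square region (m)
        INTENSITY = 0.02     # expected fracture centres per m^2

        # Homogeneous Poisson point process: Poisson count, then uniform locations.
        n = rng.poisson(INTENSITY * AREA_SIDE**2)
        centres = rng.uniform(0, AREA_SIDE, size=(n, 2))

        # Marks: orientation and trace length from assumed distributions.
        strike = rng.uniform(0, np.pi, n)      # uniform orientations
        length = rng.exponential(5.0, n)       # mean trace length 5 m

        # Trace endpoints for a 2D fracture map.
        half = np.column_stack([np.cos(strike), np.sin(strike)]) * (length / 2)[:, None]
        segments = np.stack([centres - half, centres + half], axis=1)
        print(n, segments.shape)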

  17. Evaluating Dense 3d Reconstruction Software Packages for Oblique Monitoring of Crop Canopy Surface

    NASA Astrophysics Data System (ADS)

    Brocks, S.; Bareth, G.

    2016-06-01

    Crop Surface Models (CSMs) are 2.5D raster surfaces representing absolute plant canopy height. Using multiple CSMs generated from data acquired at multiple time steps, crop surface monitoring is enabled. This makes it possible to monitor crop growth over time and can be used for monitoring in-field crop growth variability, which is useful in the context of high-throughput phenotyping. This study aims to evaluate several software packages for dense 3D reconstruction from multiple overlapping RGB images at field and plot scale. A summer barley field experiment located at the Campus Klein-Altendorf of the University of Bonn was observed by acquiring stereo images from an oblique angle using consumer-grade smart cameras. Two such cameras were mounted at an elevation of 10 m and acquired images for a period of two months during the growing period of 2014. The field experiment consisted of nine barley cultivars that were cultivated in multiple repetitions and nitrogen treatments. Manual plant height measurements were carried out at four dates during the observation period. The software packages Agisoft PhotoScan, VisualSfM with CMVS/PMVS2, and SURE are investigated. The point clouds are georeferenced through a set of ground control points. Where adequate results are reached, a statistical analysis is performed.

  18. PHAST: Protein-like heteropolymer analysis by statistical thermodynamics

    NASA Astrophysics Data System (ADS)

    Frigori, Rafael B.

    2017-06-01

    PHAST is a software package written in standard Fortran, with MPI and CUDA extensions, able to efficiently perform parallel multicanonical Monte Carlo simulations of single or multiple heteropolymeric chains as coarse-grained models for proteins. The outcome data can be straightforwardly analyzed within its microcanonical Statistical Thermodynamics module, which allows for computing the entropy, caloric curve, specific heat and free energies. As a case study, we investigate the aggregation of heteropolymers bioinspired by Aβ25-33 fragments and their cross-seeding with IAPP20-29 isoforms. Excellent parallel scaling is observed, even under numerically difficult first-order-like phase transitions, which are properly described by the built-in, fully reconfigurable force fields. The package is free and open source, which should motivate users to readily adapt it to specific purposes.

  19. Intrinsic noise analyzer: a software package for the exploration of stochastic biochemical kinetics using the system size expansion.

    PubMed

    Thomas, Philipp; Matuschek, Hannes; Grima, Ramon

    2012-01-01

    The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte-Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amounts of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen's system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system GiNaC with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlation coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. We showcase iNA's performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with circadian rhythms. The software iNA is freely available as executable binaries for Linux, MacOSX and Microsoft Windows, as well as the full source code under an open source license.
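
    For the simplest possible system, the machinery iNA automates can be written out by hand. The sketch below (my own toy model, not iNA code) integrates the Linear Noise Approximation moment equations for a birth-death process, whose stationary variance equals its mean (Poisson):

        from scipy.integrate import solve_ivp

        k, g = 10.0, 0.1   # birth rate, per-capita death rate

        def lna_moments(t, y):
            """Macroscopic mean phi and LNA variance v for a birth-death process."""
            phi, v = y
            return [k - g * phi,               # macroscopic rate equation
                    -2 * g * v + k + g * phi]  # linear noise variance equation

        sol = solve_ivp(lna_moments, (0.0, 100.0), [0.0, 0.0])
        phi_ss, v_ss = sol.y[:, -1]
        print(f"mean = {phi_ss:.1f}, variance = {v_ss:.1f}  (both approach k/g = {k/g:.0f})")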

  20. Intrinsic Noise Analyzer: A Software Package for the Exploration of Stochastic Biochemical Kinetics Using the System Size Expansion

    PubMed Central

    Grima, Ramon

    2012-01-01

    The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte-Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amounts of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen’s system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system GiNaC with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlation coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. We showcase iNA’s performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with circadian rhythms. The software iNA is freely available as executable binaries for Linux, MacOSX and Microsoft Windows, as well as the full source code under an open source license. PMID:22723865

  1. Design requirements for SRB production control system. Volume 3: Package evaluation, modification and hardware

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software package evaluation was designed to analyze commercially available, field-proven production control and manufacturing resource planning software packages. The analysis was conducted by comparing SRB production control software requirements and the conceptual system design to software package capabilities. The methodology of evaluation and the findings at each stage of evaluation are described. Topics covered include: vendor listing; request for information (RFI) document; RFI response rate and quality; RFI evaluation process; and capabilities versus requirements.

  2. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, W. A.; Lepicovsky, J.

    1992-01-01

    The software for configuring an LV counter processor system has been developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system has been developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.
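
    For the nonuniformly sampled data mentioned in the closing sentence, a standard spectral tool is the Lomb-Scargle periodogram; a minimal SciPy sketch (my example, unrelated to the original LV code):

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(5)
        t = np.sort(rng.uniform(0, 10, 400))   # irregular sample times (s)
        y = np.sin(2 * np.pi * 3.0 * t) + 0.5 * rng.normal(size=t.size)

        freqs = np.linspace(0.1, 10, 1000)     # trial frequencies (Hz)
        # lombscargle expects angular frequencies; precenter the signal by hand.
        pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)
        print(f"peak at {freqs[pgram.argmax()]:.2f} Hz")   # ~3 Hz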

  3. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, William A.

    1992-01-01

    The software for configuring a Laser Velocimeter (LV) counter processor system was developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system was developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.

  4. Faculty/Student Surveys Using Open Source Software

    ERIC Educational Resources Information Center

    Kaceli, Sali

    2004-01-01

    This session will highlight an easy survey package that lets non-technical users create surveys, administer surveys, gather results, and view statistics. It is an open source application managed entirely online via a web browser. By using phpESP, faculty are given the freedom to create various surveys at their convenience and link them to their…

  5. An R package for analyzing and modeling ranking data

    PubMed Central

    2013-01-01

    Background In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty’s and Koczkodaj’s inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Results Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) to the computerization of clinical practice. The mean rank showed that item 4 was the most preferred item and item 3 the least preferred, and a significant difference was found between physicians’ preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference for the seven items (labeled “internal/external”), and the second as their overall variance (labeled “push/pull factors”). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman’s footrule distance. Conclusions In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data by applying multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose the one most suitable to their specific situations. PMID:23672645

  6. An R package for analyzing and modeling ranking data.

    PubMed

    Lee, Paul H; Yu, Philip L H

    2013-05-14

    In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty's and Koczkodaj's inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) to the computerization of clinical practice. The mean rank showed that item 4 was the most preferred item and item 3 the least preferred, and a significant difference was found between physicians' preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference for the seven items (labeled "internal/external"), and the second as their overall variance (labeled "push/pull factors"). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman's footrule distance. In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data by applying multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose the one most suitable to their specific situations.
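
    The descriptive statistics pmr computes are easy to express outside R as well; here is a generic Python sketch (toy rankings, not the physician dataset) of the mean rank and the pairwise frequency matrix:

        import numpy as np

        # Each row is one judge's ranking: rankings[j, i] = rank given to item i (1 = best).
        rankings = np.array([[1, 2, 3, 4],
                             [2, 1, 3, 4],
                             [1, 3, 2, 4],
                             [3, 1, 2, 4]])

        print("mean ranks:", rankings.mean(axis=0))

        # pairwise[i, j] = number of judges who ranked item i above (better than) item j.
        n_items = rankings.shape[1]
        pairwise = np.zeros((n_items, n_items), dtype=int)
        for i in range(n_items):
            for j in range(n_items):
                if i != j:
                    pairwise[i, j] = np.sum(rankings[:, i] < rankings[:, j])
        print(pairwise)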

  7. SpecBit, DecayBit and PrecisionBit: GAMBIT modules for computing mass spectra, particle decay rates and precision observables

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin

    2018-01-01

    We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they were a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.

  8. pROC: an open-source package for R and S+ to analyze and compare ROC curves.

    PubMed

    Robin, Xavier; Turck, Natacha; Hainard, Alexandre; Tiberti, Natalia; Lisacek, Frédérique; Sanchez, Jean-Charles; Müller, Markus

    2011-03-17

    Receiver operating characteristic (ROC) curves are useful tools to evaluate classifiers in biomedical and bioinformatics applications. However, conclusions are often reached through inconsistent use or insufficient statistical analysis. To support researchers in their ROC curve analysis we developed pROC, a package for R and S+ that contains a set of tools displaying, analyzing, smoothing and comparing ROC curves in a user-friendly, object-oriented and flexible interface. With data previously imported into the R or S+ environment, the pROC package builds ROC curves and includes functions for computing confidence intervals, statistical tests for comparing total or partial area under the curve or the operating points of different classifiers, and methods for smoothing ROC curves. Intermediary and final results are visualised in user-friendly interfaces. A case study based on published clinical and biomarker data shows how to perform a typical ROC analysis with pROC. pROC is a package for R and S+ specifically dedicated to ROC analysis. It proposes multiple statistical tests to compare ROC curves, and in particular partial areas under the curve, allowing proper ROC interpretation. pROC is available in two versions: in the R programming language or with a graphical user interface in the S+ statistical software. It is accessible at http://expasy.org/tools/pROC/ under the GNU General Public License. It is also distributed through the CRAN and CSAN public repositories, facilitating its installation.
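
    The central quantities of an ROC analysis can be sketched compactly in Python (generic code, not the pROC API): the AUC via its Mann-Whitney rank formulation, with ties handled by midranks, plus a percentile bootstrap confidence interval.

        import numpy as np
        from scipy.stats import rankdata

        def auc(scores, labels):
            """AUC = Mann-Whitney U / (n1 * n0); rankdata assigns midranks to ties."""
            r = rankdata(scores)
            n1 = labels.sum()
            n0 = labels.size - n1
            return (r[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

        rng = np.random.default_rng(6)
        labels = rng.integers(0, 2, 200)
        scores = labels + rng.normal(0, 0.8, 200)   # toy classifier scores

        # Percentile bootstrap: resample cases with replacement and recompute the AUC.
        boot = []
        for _ in range(2000):
            idx = rng.integers(0, labels.size, labels.size)
            boot.append(auc(scores[idx], labels[idx]))
        print(f"AUC = {auc(scores, labels):.3f}, 95% CI = {np.percentile(boot, [2.5, 97.5])}")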

  9. A comparison of six software packages for evaluation of solid lung nodules using semi-automated volumetry: what is the minimum increase in size to detect growth in repeated CT examinations.

    PubMed

    de Hoop, Bartjan; Gietema, Hester; van Ginneken, Bram; Zanen, Pieter; Groenewegen, Gerard; Prokop, Mathias

    2009-04-01

    We compared interexamination variability of CT lung nodule volumetry with six currently available semi-automated software packages to determine the minimum change needed to detect the growth of solid lung nodules. We had ethics committee approval. To simulate a follow-up examination with zero growth, we performed two low-dose unenhanced CT scans in 20 patients referred for pulmonary metastases. Between examinations, patients got off and on the table. Volumes of all pulmonary nodules were determined on both examinations using six nodule evaluation software packages. Variability (upper limit of the 95% confidence interval of the Bland-Altman plot) was calculated for nodules for which segmentation was visually rated as adequate. We evaluated 214 nodules (mean diameter 10.9 mm, range 3.3 mm-30.0 mm). Software packages provided adequate segmentation in 71% to 86% of nodules (p < 0.001). In case of adequate segmentation, variability in volumetry between scans ranged from 16.4% to 22.3% for the various software packages. Variability with five to six software packages was significantly less for nodules ≥8 mm in diameter (range 12.9%-17.1%) than for nodules <8 mm (range 18.5%-25.6%). Segmented volumes of each package were compared to each of the other packages. Systematic volume differences were detected in 11/15 comparisons. This hampers comparison of nodule volumes between software packages.

  10. Software and package applicating for network meta-analysis: A usage-based comparative study.

    PubMed

    Xu, Chang; Niu, Yuming; Wu, Junyi; Gu, Huiyun; Zhang, Chao

    2017-12-21

    To compare and analyze the characteristics and functions of software applications for network meta-analysis (NMA). PubMed, EMbase, The Cochrane Library, the official websites of Bayesian inference Using Gibbs Sampling (BUGS), Stata and R, and Google were searched to collect software and packages for performing NMA; software and packages published up to March 2016 were included. After collecting the software, packages, and their user guides, we used the software and packages to analyze a typical example. All characteristics, functions, and computed results were compared and analyzed. Ten types of software were included, covering both programming and non-programming tools, developed mainly on Bayesian or frequentist theory. Most are easy to operate and master, calculate accurately, or produce excellent graphics. However, no single tool combined accurate calculation with superior graphics; this could only be achieved by combining two or more tools. This study suggests that users should choose the appropriate software according to their programming background, working habits, and budget, and then consider combining BUGS with R (or Stata) to perform the NMA.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William Eugene

    These slides describe different strategies for installing Python software. Although I am a big fan of Python software development, robust software installation remains a challenge. This talk describes several installation scenarios. The Good: the user has administrative privileges - installing on Windows with an installer executable, installing with a Linux application utility, installing a Python package from the PyPI repository, and installing a Python package from source. The Bad: the user does not have administrative privileges - using a virtual environment to isolate package installations, and using an installer executable on Windows with a virtual environment. The Ugly: the user needs to install an extension package from source - installing a Python extension package from source, and PyCoinInstall - managing builds for Python extension packages. The last item, PyCoinInstall, describes a utility being developed for the COIN-OR software, which is used within the operations research community. COIN-OR includes a variety of Python and C++ software packages, and this script uses a simple plug-in system to support the management of package builds and installation.
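
    The virtual environment scenario above needs only the standard library; a minimal sketch (generic, the demo-env path is hypothetical) that programmatically creates an isolated environment with pip bootstrapped:

        import venv
        from pathlib import Path

        env_dir = Path("demo-env")   # hypothetical location for the new environment

        # Create an isolated environment with its own interpreter and pip bootstrapped,
        # so packages can be installed without administrative privileges.
        builder = venv.EnvBuilder(with_pip=True, clear=True)
        builder.create(env_dir)

        print(f"activate with: source {env_dir}/bin/activate  (POSIX shells)")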

  12. Mercury⊕: An evidential reasoning image classifier

    NASA Astrophysics Data System (ADS)

    Peddle, Derek R.

    1995-12-01

    MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
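
    The combination step at the heart of any Dempster-Shafer classifier can be sketched briefly in Python (a generic implementation of Dempster's rule, not the MERCURY⊕ code): focal elements of two mass functions are intersected and the result renormalized by the conflict.

        def dempster_combine(m1, m2):
            """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
            combined, conflict = {}, 0.0
            for a, wa in m1.items():
                for b, wb in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + wa * wb
                    else:
                        conflict += wa * wb   # mass committed to the empty set
            return {s: w / (1.0 - conflict) for s, w in combined.items()}

        # Two evidence sources over hypothetical land-cover classes.
        frame = frozenset({"tundra", "forest", "water"})
        m1 = {frozenset({"tundra"}): 0.6, frame: 0.4}           # 0.4 left uncommitted
        m2 = {frozenset({"tundra", "forest"}): 0.7, frame: 0.3}
        print(dempster_combine(m1, m2))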

  13. Performance Assessment and Translation of Physiologically Based Pharmacokinetic Models From acslX to Berkeley Madonna, MATLAB, and R Language: Oxytetracycline and Gold Nanoparticles As Case Examples.

    PubMed

    Lin, Zhoumeng; Jaberi-Douraki, Majid; He, Chunla; Jin, Shiqiang; Yang, Raymond S H; Fisher, Jeffrey W; Riviere, Jim E

    2017-07-01

    Many physiologically based pharmacokinetic (PBPK) models for environmental chemicals, drugs, and nanomaterials have been developed to aid risk and safety assessments using acslX. However, acslX has been sunset since November 2015. Alternative modeling tools and tutorials are needed for future PBPK applications. This forum article aimed to: (1) demonstrate the performance of 4 PBPK modeling software packages (acslX, Berkeley Madonna, MATLAB, and R language) tested using 2 existing models (oxytetracycline and gold nanoparticles); (2) provide a tutorial of PBPK model code conversion from acslX to Berkeley Madonna, MATLAB, and R language; (3) discuss the advantages and disadvantages of each software package in the implementation of PBPK models in toxicology; and (4) share our perspective on future directions in this field. Simulation results of plasma/tissue concentrations/amounts of oxytetracycline and gold from the different models were compared visually and statistically with linear regression analyses. Simulation results from the original models correlated well with results from the recoded models, with time-concentration/amount curves nearly superimposable and determination coefficients of 0.86-1.00. Step-by-step explanations of the recoding of the models in the different software programs are provided in the Supplementary Data. In summary, this article presents a tutorial of PBPK model code conversion for a small molecule and a nanoparticle among 4 software packages, and a performance comparison of these software packages in PBPK model implementation. This tutorial helps beginners learn PBPK modeling, provides suggestions for selecting a suitable tool for future projects, and may ease the transition from acslX to alternative modeling tools.
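
    For readers new to this style of modeling, the sketch below implements a textbook one-compartment oral-absorption pharmacokinetic model in Python (a deliberately simple stand-in, not the oxytetracycline or gold nanoparticle PBPK models from the article):

        import numpy as np

        def one_compartment_oral(t, dose, F, ka, ke, V):
            """Plasma concentration after a single oral dose (one-compartment model)."""
            # C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), for ka != ke
            return F * dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        t = np.linspace(0, 24, 97)   # hours
        c = one_compartment_oral(t, dose=500.0, F=0.8, ka=1.2, ke=0.15, V=40.0)
        print(f"Cmax = {c.max():.2f} mg/L at t = {t[c.argmax()]:.2f} h")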

  14. Validation of thermal effects of LED package by using Elmer finite element simulation method

    NASA Astrophysics Data System (ADS)

    Leng, Lai Siang; Retnasamy, Vithyacharan; Mohamad Shahimin, Mukhzeer; Sauli, Zaliman; Taniselass, Steven; Bin Ab Aziz, Muhamad Hafiz; Vairavan, Rajendaran; Kirtsaeng, Supap

    2017-02-01

    The overall performance of a light-emitting diode (LED) package is critically affected by heat dissipation. In this study, the open-source software Elmer FEM was used for thermal analysis of the LED package, with Salome and ParaView serving as the pre- and post-processor to complete the simulation workflow. The thermal behaviour of the LED package was evaluated with this toolchain, and the result was validated against a commercially licensed package based on previous work. The difference between the two simulation results is less than 5%, which is tolerable and comparable.

  15. NDE Software Developed at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Roth, Donald J.; Martin, Richard E.; Rauser, Richard W.; Nichols, Charles; Bonacuse, Peter J.

    2014-01-01

    NASA Glenn Research Center has developed several important Nondestructive Evaluation (NDE) related software packages for different projects in the last 10 years. Three of the software packages have been created with commercial-grade user interfaces and are available to United States entities for download on the NASA Technology Transfer and Partnership Office server (https://sr.grc.nasa.gov/). This article provides brief overviews of the software packages.

  16. Geostatistics and GIS: tools for characterizing environmental contamination.

    PubMed

    Henshaw, Shannon L; Curriero, Frank C; Shields, Timothy M; Glass, Gregory E; Strickland, Paul T; Breysse, Patrick N

    2004-08-01

    Geostatistics is a set of statistical techniques used in the analysis of georeferenced data that can be applied to environmental contamination and remediation studies. In this study, the 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene (DDE) contamination at a Superfund site in western Maryland is evaluated. Concern about the site and its future clean up has triggered interest within the community because residential development surrounds the area. Spatial statistical methods, of which geostatistics is a subset, are becoming increasingly popular, in part due to the availability of geographic information system (GIS) software in a variety of application packages. In this article, the joint use of ArcGIS software and the R statistical computing environment are demonstrated as an approach for comprehensive geostatistical analyses. The spatial regression method, kriging, is used to provide predictions of DDE levels at unsampled locations both within the site and the surrounding areas where residential development is ongoing.
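
    As a compact illustration of the kriging prediction used in such studies (a generic ordinary kriging solver with an assumed exponential semivariogram, not the authors' ArcGIS/R workflow):

        import numpy as np

        def semivariogram(h, nugget=0.1, sill=1.0, range_=20.0):
            """Assumed exponential semivariogram model (hypothetical parameters)."""
            return nugget + (sill - nugget) * (1.0 - np.exp(-h / range_))

        def ordinary_kriging(xy, z, xy0):
            """Predict z at location xy0 by solving the ordinary kriging system."""
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = semivariogram(d)
            np.fill_diagonal(A[:n, :n], 0.0)   # gamma(0) = 0 for exact interpolation
            A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = semivariogram(np.linalg.norm(xy - xy0, axis=1))
            w = np.linalg.solve(A, b)          # n weights plus a Lagrange multiplier
            return w[:n] @ z

        rng = np.random.default_rng(7)
        xy = rng.uniform(0, 100, (30, 2))                        # sampled locations
        z = np.sin(xy[:, 0] / 20) + 0.1 * rng.normal(size=30)    # e.g., contaminant level
        print(ordinary_kriging(xy, z, np.array([50.0, 50.0])))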

  17. Multiscale analysis of river networks using the R package linbin

    USGS Publications Warehouse

    Welty, Ethan Z.; Torgersen, Christian E.; Brenkman, Samuel J.; Duda, Jeffrey J.; Armstrong, Jonathan B.

    2015-01-01

    Analytical tools are needed in riverine science and management to bridge the gap between GIS and statistical packages that were not designed for the directional and dendritic structure of streams. We introduce linbin, an R package developed for the analysis of riverscapes at multiple scales. With this software, riverine data on aquatic habitat and species distribution can be scaled and plotted automatically with respect to their position in the stream network or—in the case of temporal data—their position in time. The linbin package aggregates data into bins of different sizes as specified by the user. We provide case studies illustrating the use of the software for (1) exploring patterns at different scales by aggregating variables at a range of bin sizes, (2) comparing repeat observations by aggregating surveys into bins of common coverage, and (3) tailoring analysis to data with custom bin designs. Furthermore, we demonstrate the utility of linbin for summarizing patterns throughout an entire stream network, and we analyze the diel and seasonal movements of tagged fish past a stationary receiver to illustrate how linbin can be used with temporal data. In short, linbin enables more rapid analysis of complex data sets by fisheries managers and stream ecologists and can reveal underlying spatial and temporal patterns of fish distribution and habitat throughout a riverscape.
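
    The core binning operation has a simple generic analogue; the Python sketch below (toy data, not the linbin API) aggregates point observations along a river axis at two bin sizes to expose scale-dependent patterns:

        import numpy as np

        rng = np.random.default_rng(8)
        river_km = rng.uniform(0, 60, 500)      # position of each observation (km)
        counts_per_obs = rng.poisson(2, 500)    # e.g., fish counted at each point

        for bin_size in (1.0, 10.0):            # analyze the same data at two scales
            edges = np.arange(0, 60 + bin_size, bin_size)
            idx = np.digitize(river_km, edges) - 1
            totals = np.bincount(idx, weights=counts_per_obs, minlength=edges.size - 1)
            print(f"bin = {bin_size:>4} km -> {totals.size} bins, max = {totals.max():.0f}")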

  18. Evidential evaluation of DNA profiles using a discrete statistical model implemented in the DNA LiRa software.

    PubMed

    Puch-Solis, Roberto; Clayton, Tim

    2014-07-01

    The high sensitivity of the technology for producing profiles means that it has become routine to produce profiles from relatively small quantities of DNA. The profiles obtained from low template DNA (LTDNA) are affected by several phenomena which must be taken into consideration when interpreting and evaluating this evidence. Furthermore, many of the same phenomena affect profiles from higher amounts of DNA (e.g. where complex mixtures have been revealed). In this article we present a statistical model, which forms the basis of the DNA LiRa software, that is able to calculate likelihood ratios where one to four donors are postulated and for any number of replicates. The model can take into account dropin and allelic dropout for different contributors, template degradation and uncertain allele designations. In this statistical model unknown parameters are treated following the Empirical Bayesian paradigm. The performance of LiRa is tested using examples and the outputs are compared with those generated using two other statistical software packages, likeLTD and LRmix. The concept of ban efficiency is introduced as a measure for assessing model sensitivity.

  19. NLM microcomputer-based tutorials (for microcomputers). Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, M.

    1990-04-01

    The package consists of TOXLEARN--a microcomputer-based training package for TOXLINE (Toxicology Information Online), CHEMLEARN-a microcomputer-based training package for CHEMLINE (Chemical Information Online), MEDTUTOR--a microcomputer-based training package for MEDLINE (Medical Information Online), and ELHILL LEARN--a microcomputer-based training package for the ELHILL search and retrieval software that supports the above-mentioned databases...Software Description: The programs were developed under PILOTplus using the NLM LEARN Programmer. They run on IBM-PC, XT, AT, PS/2, and fully compatible computers. The programs require 512K RAM memory, one disk drive, and DOS 2.0 or higher. The software supports most monochrome, color graphics, enhanced color graphics, or visual graphics displays.

  20. Flexible Adaptive Paradigms for fMRI Using a Novel Software Package ‘Brain Analysis in Real-Time’ (BART)

    PubMed Central

    Hellrung, Lydia; Hollmann, Maurice; Zscheyge, Oliver; Schlumm, Torsten; Kalberlah, Christian; Roggenhofer, Elisabeth; Okon-Singer, Hadas; Villringer, Arno; Horstmann, Annette

    2015-01-01

    In this work we present a new open source software package offering a unified framework for the real-time adaptation of fMRI stimulation procedures. The software provides a straightforward setup and a highly flexible approach to adapting fMRI paradigms while the experiment is running. The general framework comprises the inclusion of parameters of the subject’s compliance, such as directing gaze to visually presented stimuli, and of physiological fluctuations, like blood pressure or pulse. Additionally, this approach makes it possible to investigate complex scientific questions, for example the influence of EEG rhythms or of the fMRI signal itself. To prove the concept of this approach, we used our software in a usability example for an fMRI experiment in which the presentation of emotional pictures depended on the subject’s gaze position. Gaze position can have a significant impact on the results; so far, when it is taken into account during fMRI data analysis at all, this is commonly done by the post-hoc removal of erroneous trials. Here, we propose an a priori adaptation of the paradigm during the experiment’s runtime. Our fMRI findings clearly show the benefits of an adapted paradigm in terms of statistical power and higher effect sizes in emotion-related brain regions. This can be of special interest for all experiments with low statistical power due to a limited number of subjects, a limited amount of time, costs or available data to analyze, as is the case with real-time fMRI. PMID:25837719

  1. A New Paradigm to Analyze Data Completeness of Patient Data.

    PubMed

    Nasir, Ayan; Gurupur, Varadraj; Liu, Xinliang

    2016-08-03

    There is a need to develop a tool that will measure data completeness of patient records using sophisticated statistical metrics. Patient data integrity is important in providing timely and appropriate care. Completeness is an important step, with an emphasis on understanding the complex relationships between data fields and their relative importance in delivering care. This tool will not only help understand where data problems are but also help uncover the underlying issues behind them. Develop a tool that can be used alongside a variety of health care database software packages to determine the completeness of individual patient records as well as aggregate patient records across health care centers and subpopulations. The methodology of this project is encapsulated within the Data Completeness Analysis Package (DCAP) tool, with the major components including concept mapping, CSV parsing, and statistical analysis. The results from testing DCAP with Healthcare Cost and Utilization Project (HCUP) State Inpatient Database (SID) data show that this tool is successful in identifying relative data completeness at the patient, subpopulation, and database levels. These results also solidify a need for further analysis and call for hypothesis driven research to find underlying causes for data incompleteness. DCAP examines patient records and generates statistics that can be used to determine the completeness of individual patient data as well as the general thoroughness of record keeping in a medical database. DCAP uses a component that is customized to the settings of the software package used for storing patient data as well as a Comma Separated Values (CSV) file parser to determine the appropriate measurements. DCAP itself is assessed through a proof of concept exercise using hypothetical data as well as available HCUP SID patient data.
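
    The completeness measurement at the core of DCAP can be illustrated with a short generic sketch in Python (hypothetical field names, not DCAP's concept map or the HCUP fields): parse a CSV extract and report the fraction of non-empty values per field and per record.

        import csv
        import io

        # Stand-in for a patient extract; in practice this would be a file on disk.
        raw = io.StringIO(
            "id,age,diagnosis,discharge_disposition\n"
            "1,34,I10,home\n"
            "2,,E11.9,\n"
            "3,57,,home\n"
        )

        rows = list(csv.DictReader(raw))
        fields = rows[0].keys()

        # Per-field completeness: fraction of records with a non-empty value.
        for f in fields:
            filled = sum(1 for r in rows if r[f].strip())
            print(f"{f}: {filled / len(rows):.0%}")

        # Per-record completeness: fraction of fields filled in each record.
        for r in rows:
            score = sum(1 for v in r.values() if v.strip()) / len(fields)
            print(f"record {r['id']}: {score:.0%}")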

  2. A New Paradigm to Analyze Data Completeness of Patient Data

    PubMed Central

    Nasir, Ayan; Liu, Xinliang

    2016-01-01

    Summary Background There is a need to develop a tool that will measure data completeness of patient records using sophisticated statistical metrics. Patient data integrity is important in providing timely and appropriate care. Completeness is an important step, with an emphasis on understanding the complex relationships between data fields and their relative importance in delivering care. This tool will not only help understand where data problems are but also help uncover the underlying issues behind them. Objectives Develop a tool that can be used alongside a variety of health care database software packages to determine the completeness of individual patient records as well as aggregate patient records across health care centers and subpopulations. Methods The methodology of this project is encapsulated within the Data Completeness Analysis Package (DCAP) tool, with the major components including concept mapping, CSV parsing, and statistical analysis. Results The results from testing DCAP with Healthcare Cost and Utilization Project (HCUP) State Inpatient Database (SID) data show that this tool is successful in identifying relative data completeness at the patient, subpopulation, and database levels. These results also solidify a need for further analysis and call for hypothesis driven research to find underlying causes for data incompleteness. Conclusion DCAP examines patient records and generates statistics that can be used to determine the completeness of individual patient data as well as the general thoroughness of record keeping in a medical database. DCAP uses a component that is customized to the settings of the software package used for storing patient data as well as a Comma Separated Values (CSV) file parser to determine the appropriate measurements. DCAP itself is assessed through a proof of concept exercise using hypothetical data as well as available HCUP SID patient data. PMID:27484918

  3. Nonparametric Statistics Test Software Package.

    DTIC Science & Technology

    1983-09-01

    …statistics because of their acceptance in the academic world, the availability of computer support, and flexibility in model building…

  4. Modeling Antimicrobial Activity of Clorox(R) Using an Agar-Diffusion Test: A New Twist On an Old Experiment.

    ERIC Educational Resources Information Center

    Mitchell, James K.; Carter, William E.

    2000-01-01

    Describes using a computer statistical software package called Minitab to model the sensitivity of several microbes to the disinfectant NaOCl (Clorox(R)) using the Kirby-Bauer technique. Each group of students collects data from one microbe, conducts regression analyses, then chooses the best-fit model based on the highest r-values obtained.…

  5. Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.

    PubMed

    Berry, K J; Mielke, P W

    2000-12-01

    Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
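
    The resampling idea is compact enough to sketch: compute the rank sum of one group using midranks (which compensate for tied values), then compare it against the same statistic under random relabelings of the pooled sample. This is a generic Python sketch of the Monte Carlo procedure, not the authors' FORTRAN implementation.

        import numpy as np
        from scipy.stats import rankdata

        def mc_wilcoxon(x, y, n_resamples=100_000, rng=np.random.default_rng(0)):
            pooled = np.concatenate([x, y])
            ranks = rankdata(pooled)           # midranks compensate for tied values
            expected = ranks.mean() * len(x)   # rank sum expected under the null
            observed = ranks[: len(x)].sum()   # rank sum of the first group
            count = 0
            for _ in range(n_resamples):
                perm = rng.permutation(ranks)  # random relabeling of group membership
                if abs(perm[: len(x)].sum() - expected) >= abs(observed - expected):
                    count += 1
            return (count + 1) / (n_resamples + 1)  # two-sided Monte Carlo p-value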

  6. HDX Workbench: Software for the Analysis of H/D Exchange MS Data

    NASA Astrophysics Data System (ADS)

    Pascal, Bruce D.; Willis, Scooter; Lauer, Janelle L.; Landgraf, Rachelle R.; West, Graham M.; Marciano, David; Novick, Scott; Goswami, Devrishi; Chalmers, Michael J.; Griffin, Patrick R.

    2012-09-01

    Hydrogen/deuterium exchange mass spectrometry (HDX-MS) is an established method for the interrogation of protein conformation and dynamics. While the data analysis challenge of HDX-MS has been addressed by a number of software packages, new computational tools are needed to keep pace with the improved methods and throughput of this technique. To address these needs, we report an integrated desktop program titled HDX Workbench, which facilitates automation, management, visualization, and statistical cross-comparison of large HDX data sets. Using the software, validated data analysis can be achieved at the rate of data generation. The application is available at the project home page http://hdx.florida.scripps.edu.

  7. Using R to implement spatial analysis in open source environment

    NASA Astrophysics Data System (ADS)

    Shao, Yixi; Chen, Dong; Zhao, Bo

    2007-06-01

    R is an open source (GPL) language and environment for spatial analysis, statistical computing and graphics, which provides a wide variety of statistical and graphical techniques and is highly extensible. In the Open Source environment it plays an important role in spatial analysis. To implement spatial analysis in the Open Source environment, which we call Open Source geocomputation, the R data analysis language is integrated with GRASS GIS and MySQL or PostgreSQL. This paper explains the architecture of the Open Source GIS environment and emphasizes the role R plays in spatial analysis. Furthermore, an illustration of the functions of R is given through the project of constructing CZPGIS (Cheng Zhou Population GIS), supported by the Changzhou Government, China. In this project we use R to implement geostatistics in the Open Source GIS environment, evaluating the spatial correlation of land price and estimating it by Kriging interpolation. We also use R integrated with MapServer and PHP to show how R and other Open Source software cooperate in a WebGIS environment, which demonstrates the advantages of using R for spatial analysis in the Open Source GIS environment. In the end, we point out that the packages for spatial analysis in R are still scattered and that limited memory remains a bottleneck when a large number of clients connect at the same time. Further work is therefore to organize the extensive packages, or to design normative packages, and to make R cooperate better with commercial software such as ArcIMS. We also look forward to developing packages for land price evaluation.
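
    Ordinary Kriging is closely related to Gaussian process regression, so the land-price interpolation step described above can be sketched in Python with scikit-learn; the coordinates and price values here are invented, and the authors' own workflow used R with GRASS GIS rather than this stack.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical sampled parcels: (x, y) map coordinates and observed land prices.
        coords = np.array([[0.0, 0.0], [1.0, 0.5], [0.3, 1.2], [1.5, 1.5], [0.8, 0.2]])
        prices = np.array([100.0, 140.0, 120.0, 180.0, 115.0])

        # An RBF kernel plus a noise term plays the role of the Kriging variogram model.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                                      normalize_y=True).fit(coords, prices)

        # Interpolated price surface and per-point uncertainty over a regular grid.
        grid = np.array([[x, y] for x in np.linspace(0, 1.5, 4) for y in np.linspace(0, 1.5, 4)])
        pred, std = gp.predict(grid, return_std=True)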

  8. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis.

    PubMed

    Neyeloff, Jeruza L; Fuchs, Sandra C; Moreira, Leila B

    2012-01-20

    Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but there was no previous guide available. We constructed a step-by-step guide to perform a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. It is possible to conduct a meta-analysis using only Microsoft Excel. More important, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.

  9. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis

    PubMed Central

    2012-01-01

    Background Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but there was no previous guide available. Findings We constructed a step-by-step guide to perform a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. Conclusions It is possible to conduct a meta-analysis using only Microsoft Excel. More important, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software. PMID:22264277
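
    The two pooling models the guide implements reduce to a few lines each: the fixed-effect estimate weights every study by its inverse variance, and the DerSimonian-Laird random-effects estimate inflates each variance by a between-study component tau-squared. A sketch with hypothetical effect sizes and standard errors:

        import numpy as np

        def pool(effects, ses, random_effects=True):
            effects, ses = np.asarray(effects, float), np.asarray(ses, float)
            w = 1.0 / ses**2                          # inverse-variance (fixed-effect) weights
            fixed = np.sum(w * effects) / np.sum(w)
            if not random_effects:
                return fixed, np.sqrt(1.0 / np.sum(w))
            Q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q heterogeneity statistic
            df = len(effects) - 1
            tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DerSimonian-Laird
            w_re = 1.0 / (ses**2 + tau2)              # random-effects weights
            return np.sum(w_re * effects) / np.sum(w_re), np.sqrt(1.0 / np.sum(w_re))

        est, se = pool([0.12, 0.30, 0.25], [0.08, 0.10, 0.06])  # three hypothetical studies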

  10. ADFNE: Open source software for discrete fracture network engineering, two and three dimensional applications

    NASA Astrophysics Data System (ADS)

    Fadakar Alghalandis, Younes

    2017-05-01

    A rapidly growing topic, discrete fracture network engineering (DFNE) has already attracted many talents from diverse disciplines in academia and industry around the world to challenge difficult problems related to mining, geothermal, civil, oil and gas, water and many other projects. Although a few commercial software packages are capable of providing some useful functionalities fundamental for DFNE, their costs, closed-code (black box) distributions and hence limited programmability and tractability encouraged us to respond to this rising demand with a new solution. This paper introduces an open source comprehensive software package for stochastic modeling of fracture networks in two and three dimensions in discrete formulation. Functionalities included are geometric modeling (e.g., complex polygonal fracture faces, and utilizing directional statistics), simulations, characterizations (e.g., intersection, clustering and connectivity analyses) and applications (e.g., fluid flow). The package is completely written in the Matlab scripting language. Significant efforts have been made to bring maximum flexibility to the functions in order to solve problems in both two and three dimensions in an easy and unified way that is suitable for beginners, advanced and experienced users.

  11. Visual Data Analysis for Satellites

    NASA Technical Reports Server (NTRS)

    Lau, Yee; Bhate, Sachin; Fitzpatrick, Patrick

    2008-01-01

    The Visual Data Analysis Package is a collection of programs and scripts that facilitate visual analysis of data available from NASA and NOAA satellites, as well as dropsonde, buoy, and conventional in-situ observations. The package features utilities for data extraction, data quality control, statistical analysis, and data visualization. The Hierarchical Data Format (HDF) satellite data extraction routines from NASA's Jet Propulsion Laboratory were customized for specific spatial coverage and file input/output. Statistical analysis includes the calculation of the relative error, the absolute error, and the root mean square error. Other capabilities include curve fitting through the data points to fill in missing data points between satellite passes or where clouds obscure satellite data. For data visualization, the software provides customizable Generic Mapping Tool (GMT) scripts to generate difference maps, scatter plots, line plots, vector plots, histograms, time series, and color fill images.
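
    The three error statistics named above are one-liners once satellite retrievals and in-situ observations have been paired; a sketch (the pairing itself, e.g. collocating buoy and satellite records, is assumed done upstream):

        import numpy as np

        def error_stats(satellite, in_situ):
            s, o = np.asarray(satellite, float), np.asarray(in_situ, float)
            abs_err = np.mean(np.abs(s - o))              # mean absolute error
            rel_err = np.mean(np.abs(s - o) / np.abs(o))  # mean relative error (o must be nonzero)
            rmse = np.sqrt(np.mean((s - o) ** 2))         # root mean square error
            return abs_err, rel_err, rmse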

  12. NMRbox: A Resource for Biomolecular NMR Computation.

    PubMed

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds of software packages. Discovery, acquisition, installation, and maintenance of all these packages is a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users. Copyright © 2017 Biophysical Society. All rights reserved.

  13. [Digital administrative maps - a tool for visualization of epidemiological data].

    PubMed

    Niewiadomska, Ewa; Kowalska, Malgorzata; Czech, Elzbieta; Skrzypek, Michal

    2013-01-01

    The aim of the study is to present methods for visualization of epidemiological data using digital contour maps that take into account the administrative division of Poland. The possibilities of epidemiological data visualization in a geographical order, limited to the administrative levels of the country, voivodeships and poviats (counties), are presented. They are crucial for the process of identifying and undertaking adequate prophylactic activities directed towards decreasing the risk and improving the population's health. This paper presents tools and techniques available in the Geographic Information System ArcGIS and the statistical software package R. The work includes our own data reflecting: 1) the values of specific mortality rates due to respiratory diseases, Poland, 2010, based on the Central Statistical Office data, using the R statistical software package; 2) the averaged registered incidence rates of sarcoidosis in 2006-2010 for the population aged 19+ in the Silesian voivodeship, using the Geographic Information System ArcGIS; and 3) the number of children with diagnosed respiratory diseases in the city of Legnica in 2009, taking into account their place of residence, using layered maps in the Geographic Information System ArcGIS. The tools presented and described in this paper make it possible to visualize the results of research, to increase the attractiveness of courses for students, as well as to enhance the skills and competence of students and participants of courses.

  14. Statistical issues in signal extraction from microarrays

    NASA Astrophysics Data System (ADS)

    Bergemann, Tracy; Quiaoit, Filemon; Delrow, Jeffrey J.; Zhao, Lue Ping

    2001-06-01

    Microarray technologies are increasingly used in biomedical research to study genome-wide expression profiles in the post genomic era. Their popularity is largely due to their high throughput and economical affordability. For example, microarrays have been applied to studies of cell cycle, regulatory circuitry, cancer cell lines, tumor tissues, and drug discoveries. One obstacle facing the continued success of applying microarray technologies, however, is the random variation present on microarrays: within signal spots, between spots and among chips. In addition, signals extracted by available software packages seem to vary significantly. Despite a variety of software packages, it appears that there are two major approaches to signal extraction. One approach is to focus on the identification of signal regions and hence estimation of signal levels above background levels. The other approach is to use the distribution of intensity values as a way of identifying relevant signals. Building upon both approaches, the objective of our work is to develop a method that is statistically rigorous and also efficient and robust. Statistical issues to be considered here include: (1) how to refine grid alignment so that the overall variation is minimized, (2) how to estimate the signal levels relative to the local background levels as well as the variance of this estimate, and (3) how to integrate red and green channel signals so that the ratio of interest is stable, simultaneously relaxing distributional assumptions.
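
    On the third issue, a common way to integrate the red and green channel signals is background subtraction followed by a log transform, optionally replaced by a variance-stabilizing arcsinh that avoids exploding ratios at low intensities. This is a generic sketch of that idea, not the method the authors develop:

        import numpy as np

        def log_ratio(red_fg, red_bg, green_fg, green_bg, stabilize=True):
            r = np.maximum(np.asarray(red_fg, float) - red_bg, 0.0)    # background-corrected red
            g = np.maximum(np.asarray(green_fg, float) - green_bg, 0.0)  # background-corrected green
            if stabilize:
                return np.arcsinh(r) - np.arcsinh(g)  # behaves like a log ratio at high intensity
            return np.log2((r + 1.0) / (g + 1.0))     # pseudocount guards against zeros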

  15. Novel features and enhancements in BioBin, a tool for the biologically inspired binning and association analysis of rare variants

    PubMed Central

    Byrska-Bishop, Marta; Wallace, John; Frase, Alexander T; Ritchie, Marylyn D

    2018-01-01

    Abstract Motivation BioBin is an automated bioinformatics tool for the multi-level biological binning of sequence variants. Herein, we present a significant update to BioBin which expands the software to facilitate a comprehensive rare variant analysis and incorporates novel features and analysis enhancements. Results In BioBin 2.3, we extend our software tool by implementing statistical association testing, updating the binning algorithm, as well as incorporating novel analysis features, providing for a robust, highly customizable, and unified rare variant analysis tool. Availability and implementation The BioBin software package is open source and freely available to users at http://www.ritchielab.com/software/biobin-download. Contact: mdritchie@geisinger.edu. Supplementary information Supplementary data are available at Bioinformatics online. PMID:28968757

  16. Closing the loop on improvement: Packaging experience in the Software Engineering Laboratory

    NASA Technical Reports Server (NTRS)

    Waligora, Sharon R.; Landis, Linda C.; Doland, Jerry T.

    1994-01-01

    As part of its award-winning software process improvement program, the Software Engineering Laboratory (SEL) has developed an effective method for packaging organizational best practices based on real project experience into useful handbooks and training courses. This paper shares the SEL's experience over the past 12 years creating and updating software process handbooks and training courses. It provides cost models and guidelines for successful experience packaging derived from SEL experience.

  17. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    PubMed

    Janssen, Dirk P

    2012-03-01

    Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed models analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
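
    Outside SPSS, the same crossed-random-effects model (random intercepts for both participants and items) can be approximated in Python with statsmodels by placing all observations in a single group and declaring each random factor as a variance component; the column names below are hypothetical:

        import pandas as pd
        import statsmodels.formula.api as smf

        # df has columns: rt (response), condition, subject, item  (hypothetical names)
        def fit_crossed(df: pd.DataFrame):
            df = df.assign(const_group=1)  # one group, so subject and item can cross freely
            model = smf.mixedlm(
                "rt ~ condition", df, groups="const_group",
                re_formula="0",  # suppress the redundant intercept of the single group
                vc_formula={"subject": "0 + C(subject)",   # random intercept per participant
                            "item": "0 + C(item)"})        # random intercept per item
            return model.fit()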

  18. Low-cost digital image processing at the University of Oklahoma

    NASA Technical Reports Server (NTRS)

    Harrington, J. A., Jr.

    1981-01-01

    Computer assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and is dependent upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house generated preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and image analysis using either of the two approaches for low-cost LANDSAT data processing are described.

  19. [Propensity score matching in SPSS].

    PubMed

    Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli

    2015-11-01

    To realize propensity score matching in the PS Matching module of SPSS and interpret the analysis results. The R software, the plug-in linking R with the corresponding version of SPSS, and the propensity score matching package were installed. A PS matching module was added to the SPSS interface, and its use was demonstrated with test data. Score estimation and nearest-neighbor matching were achieved with the PS matching module, and the results of qualitative and quantitative statistical description and evaluation were presented in the form of matching graphs. Propensity score matching can be accomplished conveniently using SPSS software.
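
    The two steps the module performs, logistic-regression score estimation and nearest-neighbor matching on the estimated score, can be sketched independently of SPSS; a minimal Python version with scikit-learn, where the covariate matrix and treatment indicator are assumed given:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        def ps_match(X, treated):
            """X: covariate matrix; treated: boolean array. Returns (treated, control) index pairs."""
            ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
            t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
            nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
            _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))  # nearest control by score
            return list(zip(t_idx, c_idx[match.ravel()]))       # matching with replacement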

  20. BlueSNP: R package for highly scalable genome-wide association studies using Hadoop clusters.

    PubMed

    Huang, Hailiang; Tata, Sandeep; Prill, Robert J

    2013-01-01

    Computational workloads for genome-wide association studies (GWAS) are growing in scale and complexity outpacing the capabilities of single-threaded software designed for personal computers. The BlueSNP R package implements GWAS statistical tests in the R programming language and executes the calculations across computer clusters configured with Apache Hadoop, a de facto standard framework for distributed data processing using the MapReduce formalism. BlueSNP makes computationally intensive analyses, such as estimating empirical p-values via data permutation, and searching for expression quantitative trait loci over thousands of genes, feasible for large genotype-phenotype datasets. http://github.com/ibm-bioinformatics/bluesnp

  1. pyam: Python Implementation of YaM

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    pyam is a software development framework with tools for facilitating the rapid development of software in a concurrent software development environment. pyam provides solutions for development challenges associated with software reuse, managing multiple software configurations, developing software product lines, and multiple platform development and build management. pyam uses release-early, release-often development cycles to allow developers to integrate their changes incrementally into the system on a continual basis. It facilitates the creation and merging of branches to support the isolated development of immature software to avoid impacting the stability of the development effort. It uses modules and packages to organize and share software across multiple software products, and uses the concepts of link and work modules to reduce sandbox setup times even when the code-base is large. One side benefit is the enforcement of a strong module-level encapsulation of a module's functionality and interface. This increases design transparency, system stability, and software reuse. pyam is written in Python and is organized as a set of utilities on top of the open source SVN software version control package. All development software is organized into a collection of modules. pyam packages are defined as sub-collections of the available modules. Developers can set up private sandboxes for module/package development. All module/package development takes place on private SVN branches. High-level pyam commands support the setup, update, and release of modules and packages. Released and pre-built versions of modules are available to developers. Developers can tailor the source/link module mix for their sandboxes so that new sandboxes (even large ones) can be built up easily and quickly by pointing to pre-existing module releases. All inter-module interfaces are publicly exported via links. A minimal, but uniform, convention is used for building modules.

  2. Craniux: A LabVIEW-Based Modular Software Framework for Brain-Machine Interface Research

    DTIC Science & Technology

    2011-01-01

    …open-source BMI software solutions are currently available, we feel that the Craniux software package fills a specific need in the realm of BMI…data, such as cortical source imaging using EEG or MEG recordings. It is with these characteristics in mind that we feel the Craniux software package…

  3. Astronomical Software Directory Service

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Payne, H.; Hayes, J.

    1998-01-01

    This is the final report on the development of the Astronomical Software Directory Service (ASDS), a distributable, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching.

  4. Quantitative evaluation of software packages for single-molecule localization microscopy.

    PubMed

    Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael

    2015-08-01

    The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.

  5. Genome-wide regression and prediction with the BGLR statistical package.

    PubMed

    Pérez, Paulino; de los Campos, Gustavo

    2014-10-01

    Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis. Copyright © 2014 by the Genetics Society of America.
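
    For readers outside R, the flavor of a large-p-with-small-n shrinkage regression is easy to reproduce in Python; scikit-learn's BayesianRidge is a far simpler analog of the Gaussian-prior models in BGLR, shown here purely as an illustration, not a reimplementation:

        import numpy as np
        from sklearn.linear_model import BayesianRidge

        rng = np.random.default_rng(1)
        n, p = 100, 1000                        # many more markers than samples
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:10] = 0.5                         # only 10 markers carry signal
        y = X @ beta + rng.standard_normal(n)

        model = BayesianRidge().fit(X, y)       # Gaussian shrinkage prior on all effects
        print(model.coef_[:10].round(2))        # shrunken estimates of the true effects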

  6. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik V.; Marwan, Norbert; Dijkstra, Henk A.; Kurths, Jürgen

    2015-11-01

    We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology.
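
    The recurrence-based methods in the package all start from the same object: a matrix marking which pairs of states lie within a threshold distance eps of each other. The concept fits in a few lines of NumPy (this illustrates the definition, not pyunicorn's API):

        import numpy as np

        def recurrence_matrix(x, eps):
            """x: (n,) series or (n, d) state vectors; returns the binary n-by-n recurrence matrix."""
            x = np.asarray(x, float)
            if x.ndim == 1:                     # univariate series -> column of state vectors
                x = x[:, None]
            dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # pairwise distances
            return (dist <= eps).astype(int)    # adjacency matrix of the recurrence network

        R = recurrence_matrix(np.sin(np.linspace(0, 20, 200)), eps=0.1)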

  7. Browndye: A Software Package for Brownian Dynamics

    PubMed Central

    McCammon, J. Andrew

    2010-01-01

    A new software package, Browndye, is presented for simulating the diffusional encounter of two large biological molecules. It can be used to estimate second-order rate constants and encounter probabilities, and to explore reaction trajectories. Browndye builds upon previous knowledge and algorithms from software packages such as UHBD, SDA, and Macrodox, while implementing algorithms that scale to larger systems. PMID:21132109
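
    The propagation step at the heart of any Brownian dynamics code combines force-driven drift with a random diffusive displacement. A sketch of one Ermak-McCammon-style step (Browndye itself adds reaction criteria and adaptive time stepping on top of this):

        import numpy as np

        def bd_step(x, force, D, dt, kT=1.0, rng=np.random.default_rng()):
            """One Brownian dynamics step for positions x of shape (n, 3)."""
            drift = (D / kT) * force * dt                                 # force-driven drift
            noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)  # diffusive displacement
            return x + drift + noise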

  8. Development of a software package for solid-angle calculations using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai

    2014-02-01

    Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, which are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, in which a new type of variance reduction technique was integrated. The package, developed under the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface, in which, the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate the solid angle subtended by a detector with different geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) to a point, circular or cylindrical source without any difficulty. The results obtained from the proposed software package were compared with those obtained from previous studies and calculated using Geant4. It shows that the proposed software package can produce accurate solid-angle values with a greater computation speed than Geant4.
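
    The Monte Carlo idea behind such a calculation is simple: sample directions isotropically around the source and count the fraction of rays that strike the detector face. A sketch for a point source and a coaxial disk detector, with no variance reduction and hypothetical geometry values:

        import numpy as np

        def solid_angle_disk(radius, distance, n=1_000_000, rng=np.random.default_rng(0)):
            """Solid angle (sr) subtended by a disk of given radius at `distance` on the z-axis."""
            cos_t = rng.uniform(-1.0, 1.0, n)   # isotropic directions: uniform cos(theta)
            sin_t = np.sqrt(1.0 - cos_t**2)
            # A ray hits the disk plane inside the radius when tan(theta) <= radius/distance.
            hits = (cos_t > 0) & (distance * sin_t <= radius * cos_t)
            return 4.0 * np.pi * hits.mean()

        # Check against the closed form 2*pi*(1 - d/sqrt(d^2 + r^2)) for this geometry:
        approx = solid_angle_disk(radius=1.0, distance=2.0)
        exact = 2 * np.pi * (1 - 2.0 / np.sqrt(2.0**2 + 1.0**2))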

  9. Verification of statistical method CORN for modeling of microfuel in the case of high grain concentration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chukbar, B. K., E-mail: bchukbar@mail.ru

    Two methods of modeling a double-heterogeneity fuel are studied: the deterministic positioning and the statistical method CORN of the MCU software package. The effect of distribution of microfuel in a pebble bed on the calculation results is studied. The results of verification of the statistical method CORN for the cases of the microfuel concentration up to 170 cm⁻³ in a pebble bed are presented. The admissibility of homogenization of the microfuel coating with the graphite matrix is studied. The dependence of the reactivity on the relative location of fuel and graphite spheres in a pebble bed is found.

  10. Can I Trust This Software Package? An Exercise in Validation of Computational Results

    ERIC Educational Resources Information Center

    Shacham, Mordechai; Brauner, Neima; Ashurst, W. Robert; Cutlip, Michael B.

    2008-01-01

    Mathematical software packages such as Polymath, MATLAB, and Mathcad are currently widely used for engineering problem solving. Applications of several of these packages to typical chemical engineering problems have been demonstrated by Cutlip, et al. The main characteristic of these packages is that they provide a "problem-solving environment…

  11. HDBStat!: a platform-independent software suite for statistical analysis of high dimensional biology data.

    PubMed

    Trivedi, Prinal; Edwards, Jode W; Wang, Jelai; Gadbury, Gary L; Srinivasasainagendra, Vinodh; Zakharkin, Stanislav O; Kim, Kyoungmi; Mehta, Tapan; Brand, Jacob P L; Patki, Amit; Page, Grier P; Allison, David B

    2005-04-06

    Many efforts in microarray data analysis are focused on providing tools and methods for the qualitative analysis of microarray data. HDBStat! (High-Dimensional Biology-Statistics) is a software package designed for analysis of high dimensional biology data such as microarray data. It was initially developed for the analysis of microarray gene expression data, but it can also be used for some applications in proteomics and other aspects of genomics. HDBStat! provides statisticians and biologists a flexible and easy-to-use interface to analyze complex microarray data using a variety of methods for data preprocessing, quality control analysis and hypothesis testing. Results generated from data preprocessing methods, quality control analysis and hypothesis testing methods are output in the form of Excel CSV tables, graphs and an Html report summarizing data analysis. HDBStat! is a platform-independent software that is freely available to academic institutions and non-profit organizations. It can be downloaded from our website http://www.soph.uab.edu/ssg_content.asp?id=1164.

  12. International Inventory of Software Packages in the Information Field.

    ERIC Educational Resources Information Center

    Keren, Carl, Ed.; Sered, Irina, Ed.

    Designed to provide guidance in selecting appropriate software for library automation, information storage and retrieval, or management of bibliographic databases, this inventory describes 188 computer software packages. The information was obtained through a questionnaire survey of 600 software suppliers and developers who were asked to describe…

  13. ATLAS software configuration and build tool optimisation

    NASA Astrophysics Data System (ADS)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories in 6 continents. To meet the challenge of configuration and building of this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of CMT commands used for build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on CMT command optimisation in general, which made the commands approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation reduced software build time and environment setup time several-fold, increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.

  14. Glioblastoma Segmentation: Comparison of Three Different Software Packages.

    PubMed

    Fyllingen, Even Hovig; Stensjøen, Anne Line; Berntsen, Erik Magnus; Solheim, Ole; Reinertsen, Ingerid

    2016-01-01

    To facilitate a more widespread use of volumetric tumor segmentation in clinical studies, there is an urgent need for reliable, user-friendly segmentation software. The aim of this study was therefore to compare three different software packages for semi-automatic brain tumor segmentation of glioblastoma, namely BrainVoyager™ QX, ITK-Snap and 3D Slicer, and to make data available for future reference. Pre-operative, contrast enhanced T1-weighted 1.5 or 3 Tesla Magnetic Resonance Imaging (MRI) scans were obtained in 20 consecutive patients who underwent surgery for glioblastoma. MRI scans were segmented twice in each software package by two investigators. Intra-rater, inter-rater and between-software agreement was compared by using differences of means with 95% limits of agreement (LoA), Dice's similarity coefficients (DSC) and Hausdorff distance (HD). Time expenditure of segmentations was measured using a stopwatch. Eighteen tumors were included in the analyses. Inter-rater agreement was highest for BrainVoyager with difference of means of 0.19 mL and 95% LoA from -2.42 mL to 2.81 mL. Between-software agreement and 95% LoA were very similar for the different software packages. Intra-rater, inter-rater and between-software DSC were ≥ 0.93 in all analyses. Time expenditure was approximately 41 min per segmentation in BrainVoyager, and 18 min per segmentation in both 3D Slicer and ITK-Snap. Our main findings were that there is a high agreement within and between the software packages in terms of small intra-rater, inter-rater and between-software differences of means and high Dice's similarity coefficients. Time expenditure was highest for BrainVoyager, but all software packages were relatively time-consuming, which may limit usability in an everyday clinical setting.
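
    Both agreement metrics used here are simple to state: Dice's similarity coefficient is twice the overlap divided by the sum of the two segmentation volumes, and the Hausdorff distance is the largest distance from a point on one segmentation to the nearest point of the other. A voxel-level sketch with SciPy, assuming co-registered binary masks:

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def hausdorff(a, b):
            pa, pb = np.argwhere(a), np.argwhere(b)   # voxel coordinates of each segmentation
            return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])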

  15. SEDPAK—A comprehensive operational system and data-processing package in APPLESOFT BASIC for a settling tube, sediment analyzer

    NASA Astrophysics Data System (ADS)

    Goldbery, R.; Tehori, O.

    SEDPAK provides a comprehensive software package for operation of a settling tube and sand analyzer (2-0.063 mm) and includes data-processing programs for statistical and graphic output of results. The programs are menu-driven and written in APPLESOFT BASIC, conforming with APPLE 3.3 DOS. Data storage and retrieval from disc is an important feature of SEDPAK. Additional features of SEDPAK include condensation of raw settling data via standard size-calibration curves to yield statistical grain-size parameters, plots of grain-size frequency distributions and cumulative log/probability curves. The program also has a module for processing of grain-size frequency data from sieved samples. An additional feature of SEDPAK is the option for automatic data processing and graphic output of a sequential or nonsequential array of samples on one side of a disc.

  16. Evaluation of copy number variation detection for a SNP array platform

    PubMed Central

    2014-01-01

    Background Copy Number Variations (CNVs) are usually inferred from Single Nucleotide Polymorphism (SNP) arrays by use of software packages based on given algorithms. However, there is no clear understanding of the performance of these software packages; it is therefore difficult to select one or several software packages for CNV detection based on the SNP array platform. We selected four publicly available software packages designed for CNV calling from an Affymetrix SNP array, including Birdsuite, dChip, Genotyping Console (GTC) and PennCNV. The publicly available dataset generated by Array-based Comparative Genomic Hybridization (CGH), with a resolution of 24 million probes per sample, was considered to be the "gold standard". The success rate, average stability rate, sensitivity, consistency and reproducibility of these four software packages were assessed against this "gold standard". Specifically, we also compared the efficiency of detecting CNVs simultaneously by two, three or all of the software packages with that of a single software package. Results Simply from the quantity of the detected CNVs, Birdsuite detected the most while GTC detected the least. We found that Birdsuite and dChip had obvious detection bias, and GTC seemed to be inferior because of the small number of CNVs it detected. Thereafter we investigated the detection consistency between each software package and the remaining three. We found that the consistency of dChip was the lowest while that of GTC was the highest. Compared with the CNV detection results of CGH, in the matching group GTC called the most matching CNVs, and PennCNV-Affy ranked second. In the non-overlapping group, GTC called the fewest CNVs. With regard to the reproducibility of CNV calling, larger CNVs were usually replicated better. PennCNV-Affy showed the best consistency while Birdsuite showed the poorest. Conclusion We found that PennCNV outperformed the other three packages in the sensitivity and specificity of CNV calling. Obviously, each calling method had its own limitations and advantages for different data analyses. Therefore, optimized calling methods might be identified using multiple algorithms to evaluate the concordance and discordance of SNP array-based CNV calling. PMID:24555668

  17. Features of free software packages in flow cytometry: a comparison between four non-commercial software sources.

    PubMed

    Sahraneshin Samani, Fazel; Moore, Jodene K; Khosravani, Pardis; Ebrahimi, Marzieh

    2014-08-01

    Flow cytometers designed to analyze large particles are enabling new applications in biology. Data analysis is a critical component of the flow cytometry (FCM) process. In this article we compare features of four free software packages including WinMDI, Cyflogic, Flowing Software, and Cytobank.

  18. Development of a Nevada Statewide Database for Safety Analyst Software

    DOT National Transportation Integrated Search

    2017-02-02

    Safety Analyst is a software package developed by the Federal Highway Administration (FHWA) and twenty-seven participating state and local agencies including the Nevada Department of Transportation (NDOT). The software package implemented many of the...

  19. Computer Aided Drafting Packages for Secondary Education. Edition 2. PC DOS Compatible Programs. A MicroSIFT Quarterly Report.

    ERIC Educational Resources Information Center

    Pollard, Jim

    This report reviews eight IBM-compatible software packages that are available to secondary schools to teach computer-aided drafting (CAD). Software packages to be considered were selected following reviews of CAD periodicals, computers in education periodicals, advertisements, and recommendations of teachers. The packages were then rated by…

  20. Evaluation of sequence alignments and oligonucleotide probes with respect to three-dimensional structure of ribosomal RNA using ARB software package

    PubMed Central

    Kumar, Yadhu; Westram, Ralf; Kipfer, Peter; Meier, Harald; Ludwig, Wolfgang

    2006-01-01

    Background Availability of high-resolution RNA crystal structures for the 30S and 50S ribosomal subunits and the subsequent validation of comparative secondary structure models have prompted biologists to use the three-dimensional structure of ribosomal RNA (rRNA) for evaluating sequence alignments of rRNA genes. Furthermore, the secondary and tertiary structural features of rRNA are highly useful and successfully employed in designing rRNA-targeted oligonucleotide probes intended for in situ hybridization experiments. RNA3D, a program to combine sequence alignment information with the three-dimensional structure of rRNA, was developed. Integration into the ARB software package, which is used extensively by the scientific community for phylogenetic analysis and molecular probe design, has substantially extended the functionality of the ARB software suite with a 3D environment. Results The three-dimensional structure of rRNA is visualized in an OpenGL 3D environment with the ability to change the display and overlay information onto the molecule dynamically. Phylogenetic information derived from the multiple sequence alignments can be overlaid onto the molecule structure in real time. Superimposition of both statistical and non-statistical sequence-associated information onto the rRNA 3D structure can be done using a customizable color scheme, which is also applied to a textual sequence alignment for reference. Oligonucleotide probes designed by ARB probe design tools can be mapped onto the 3D structure along with the probe accessibility models for evaluation with respect to secondary and tertiary structural conformations of rRNA. Conclusion Visualization of the three-dimensional structure of rRNA in an intuitive display provides biologists with greater possibilities to carry out structure-based phylogenetic analysis. Coupled with secondary structure models of rRNA, the RNA3D program aids in validating the sequence alignments of rRNA genes and evaluating probe target sites. Superimposing the information derived from the multiple sequence alignment onto the molecule dynamically allows researchers to observe any sequence-inherited characteristics (phylogenetic information) in a real-time environment. The extended ARB software package is made freely available for the scientific community via . PMID:16672074

  1. QuantWorm: a comprehensive software package for Caenorhabditis elegans phenotypic assays.

    PubMed

    Jung, Sang-Kyu; Aleman-Meza, Boanerges; Riepe, Celeste; Zhong, Weiwei

    2014-01-01

    Phenotypic assays are crucial in genetics; however, traditional methods that rely on human observation are unsuitable for quantitative, large-scale experiments. Furthermore, there is an increasing need for comprehensive analyses of multiple phenotypes to provide multidimensional information. Here we developed an automated, high-throughput computer imaging system for quantifying multiple Caenorhabditis elegans phenotypes. Our imaging system is composed of a microscope equipped with a digital camera and a motorized stage connected to a computer running the QuantWorm software package. Currently, the software package contains one data acquisition module and four image analysis programs: WormLifespan, WormLocomotion, WormLength, and WormEgg. The data acquisition module collects images and videos. The WormLifespan software counts the number of moving worms by using two time-lapse images; the WormLocomotion software computes the velocity of moving worms; the WormLength software measures worm body size; and the WormEgg software counts the number of eggs. To evaluate the performance of our software, we compared the results of our software with manual measurements. We then demonstrated the application of the QuantWorm software in a drug assay and a genetic assay. Overall, the QuantWorm software provided accurate measurements at a high speed. Software source code, executable programs, and sample images are available at www.quantworm.org. Our software package has several advantages over current imaging systems for C. elegans. It is an all-in-one package for quantifying multiple phenotypes. The QuantWorm software is written in Java and its source code is freely available, so it does not require use of commercial software or libraries. It can be run on multiple platforms and easily customized to cope with new methods and requirements.

  2. Differential maneuvering simulator data reduction and analysis software

    NASA Technical Reports Server (NTRS)

    Beasley, G. P.; Sigman, R. S.

    1972-01-01

    A multielement data reduction and analysis software package has been developed for use with the Langley differential maneuvering simulator (DMS). This package, which has several independent elements, was developed to support all phases of DMS aircraft simulation studies with a variety of both graphical and tabular information. The overall software package is considered unique because of the number, diversity, and sophistication of the element programs available for use in a single study. The purpose of this paper is to discuss the overall DMS data reduction and analysis package by reviewing the development of the various elements of the software, showing typical results that can be obtained, and discussing how each element can be used.

  3. Problems Associated with Statistical Pattern Recognition of Acoustic Emission Signals in a Compact Tension Fatigue Specimen

    NASA Technical Reports Server (NTRS)

    Hinton, Yolanda L.

    1999-01-01

    Acoustic emission (AE) data were acquired during fatigue testing of an aluminum 2024-T4 compact tension specimen using a commercially available AE system. AE signals from crack extension were identified and separated from noise spikes, signals that reflected from the specimen edges, and signals that saturated the instrumentation. A commercially available software package was used to train a statistical pattern recognition system to classify the signals. The software trained a network to recognize signals with a 91-percent accuracy when compared with the researcher's interpretation of the data. Reasons for the discrepancies are examined and it is postulated that additional preprocessing of the AE data to focus on the extensional wave mode and eliminate other effects before training the pattern recognition system will result in increased accuracy.

  4. Large Scale Software Building with CMake in ATLAS

    NASA Astrophysics Data System (ADS)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.

  5. Western aeronautical test range real-time graphics software package MAGIC

    NASA Technical Reports Server (NTRS)

    Malone, Jacqueline C.; Moore, Archie L.

    1988-01-01

    The master graphics interactive console (MAGIC) software package used on the Western Aeronautical Test Range (WATR) of the NASA Ames Research Center is described. MAGIC is a resident real-time research tool available to flight researchers and scientists in the NASA mission control centers of the WATR at the Dryden Flight Research Facility at Edwards, California. The hardware configuration and capabilities of the real-time software package are also discussed.

  6. CheMentor Software System by H. A. Peoples

    NASA Astrophysics Data System (ADS)

    Reid, Brian P.

    1997-09-01

    CheMentor Software System H. A. Peoples. Computerized Learning Enhancements: http://www.ecis.com/~clehap; email: clehap@ecis.com; 1996 - 1997. CheMentor is a series of software packages for introductory-level chemistry, which includes Practice Items (I), Stoichiometry (I), Calculating Chemical Formulae, and the CheMentor Toolkit. The first three packages provide practice problems for students and various types of help to solve them; the Toolkit includes "calculators" for determining chemical quantities as well as the Practice Items (I) set of problems. The set of software packages is designed so that each individual product acts as a module of a common CheMentor program. As the name CheMentor implies, the software is designed as a "mentor" for students learning introductory chemistry concepts and problems. The typical use of the software would be by individual students (or perhaps small groups) as an adjunct to lectures. CheMentor is a HyperCard application and the modules are HyperCard stacks. The requirements to run the packages include a Macintosh computer with at least 1 MB of RAM, a hard drive with several MB of available space depending upon the packages selected (10 MB were required for all the packages reviewed here), and the Mac operating system 6.0.5 or later.

  7. Multivariate Statistical Analysis Software Technologies for Astrophysical Research Involving Large Data Bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, S. G.

    1994-01-01

    We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complex database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects of the SKICAT system, and of some of the scientific results achieved to date. We also developed a user-friendly package for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications and has produced real, published results.
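
    The classification step described, inducing a decision tree over measured image attributes to label detected objects, maps directly onto any modern tree learner. A scikit-learn sketch with invented features and labels (GID3* itself is not publicly packaged):

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical catalog attributes per detected object:
        # magnitude, ellipticity, core-to-total intensity ratio, FWHM.
        rng = np.random.default_rng(2)
        X = rng.standard_normal((500, 4))
        y = (X[:, 3] + 0.5 * X[:, 1] > 0).astype(int)   # fake star/galaxy labels

        tree = DecisionTreeClassifier(max_depth=5).fit(X, y)
        labels = tree.predict(X[:10])   # 0 = star, 1 = galaxy in this toy encoding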

  8. DCL System Research Using Advanced Approaches for Land-based or Ship-based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2012-09-30

    …recognition. Algorithm design and statistical analysis and feature analysis. Post-Doctoral Associate, Cornell University, Bioacoustics Research…short. The HPC-ADA was designed based on fielded systems [1-4, 6] that offer a variety of desirable attributes, specifically dynamic resource…The software package was designed to utilize parallel and distributed processing for running recognition and other advanced algorithms. DeLMA…

  9. Data Preparation 101: How to Use Query-by-Example to Get Your Research Dataset Ready for Primetime

    ERIC Educational Resources Information Center

    Lazarony, Paul J.; Driscoll, Donna A.

    2011-01-01

    Researchers are often distressed to discover that the data they wanted to use in their landmark study is not configured in a way that is usable by a Statistical Analysis Software Package (SASP). For example, the data needed may come from two or more sources and it may not be clear to the researcher how to get them combined into one analyzable…

  10. Playing with Plug-ins

    ERIC Educational Resources Information Center

    Thompson, Douglas E.

    2013-01-01

    In today's complex music software packages, many features can remain unexplored and unused. Software plug-ins--available in most every music software package, yet easily overlooked in the software's basic operations--are one such feature. In this article, I introduce readers to plug-ins and offer tips for purchasing plug-ins I have…

  11. The effect of electronic package on satisfaction in multiple sclerosis patients

    PubMed Central

    Mohamadirizi, Shahla; Shaygannejad, Vahid; Mohamadirizi, Soheila; Khosrawi, Saeid

    2016-01-01

    Background: Traditional teaching methods used in medical education cannot wholly respond to the rapid changes and growth of information as well as continuous changes in the educational needs of society, especially patients with chronic diseases such as multiple sclerosis (MS). Therefore, this study was designed with the aim of examining the effect of an electronic package on satisfaction in MS patients. Materials and Methods: The research was a quasi-experimental study. It was carried out at the MS Kashani Center affiliated to the Isfahan University of Medical Sciences, in 2013. One hundred twenty-eight patients with MS were allocated randomly into two equal groups of 64 each for education by booklet (control) and education by multimedia software (experimental) for 2 weeks. Data were collected using questionnaires, which consisted of questions about satisfaction (17 items) and questions about demographic and disease characteristics (9 items), answered by both groups before and 2 weeks after education. SPSS version 14 (DARYA software, Iran) was used to conduct statistical tests such as the independent t-test and the paired t-test for analyzing the data. The level of statistical significance was set at P < 0.05. Results: There was no significant difference between the satisfaction scores of the electronic package and control groups before the intervention, but there was a significant difference after 2 weeks of intervention (P = 0.010). Conclusion: The electronic package proved an attractive education method, so this technology can increase MS patients' motivation to study more about the disease process. PMID:27308261

  12. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging.

    PubMed

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M

    2008-01-01

    To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.

  13. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging

    PubMed Central

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, SF; Horska, A; Brancati, FL; Diehl, AM; Smith, S; Clark, JM

    2009-01-01

    Objective To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Design Feature evaluation and test–retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. Subjects A random sample of 15 obese adults with type 2 diabetes. Measurements Axial T1-weighted spin echo images centered at vertebral bodies of L2–L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test–retest reliability. Results Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test–retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Conclusion Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages. PMID:17700582

  14. Identifying biologically relevant differences between metagenomic communities.

    PubMed

    Parks, Donovan H; Beiko, Robert G

    2010-03-15

    Metagenomics is the study of genetic material recovered directly from environmental samples. Taxonomic and functional differences between metagenomic samples can highlight the influence of ecological factors on patterns of microbial life in a wide range of habitats. Statistical hypothesis tests can help us distinguish ecological influences from sampling artifacts, but knowledge of only the P-value from a statistical hypothesis test is insufficient to make inferences about biological relevance. Current reporting practices for pairwise comparative metagenomics are inadequate, and better tools are needed for comparative metagenomic analysis. We have developed a new software package, STAMP, for comparative metagenomics that supports best practices in analysis and reporting. Examination of a pair of iron mine metagenomes demonstrates that deeper biological insights can be gained using statistical techniques available in our software. An analysis of the functional potential of 'Candidatus Accumulibacter phosphatis' in two enhanced biological phosphorus removal metagenomes identified several subsystems that differ between the A. phosphatis strains in these related communities, including phosphate metabolism, secretion and metal transport. Python source code and binaries are freely available from our website at http://kiwi.cs.dal.ca/Software/STAMP. Contact: beiko@cs.dal.ca. Supplementary data are available at Bioinformatics online.

  15. Introduction to Software Packages. [Final Report.

    ERIC Educational Resources Information Center

    Frankel, Sheila, Ed.; And Others

    This document provides an introduction to applications computer software packages that support functional managers in government and encourages the use of such packages as an alternative to in-house development. A review of current application areas includes budget/project management, financial management/accounting, payroll, personnel,…

  16. Comparison of software packages for detecting differential expression in RNA-seq studies

    PubMed Central

    Seyednasrollah, Fatemeh; Laiho, Asta

    2015-01-01

    RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. However, there is not yet a clear consensus about best practices, which makes the choice of an appropriate method a daunting task, especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. PMID:24300110

  17. Comparison of software packages for detecting differential expression in RNA-seq studies.

    PubMed

    Seyednasrollah, Fatemeh; Laiho, Asta; Elo, Laura L

    2015-01-01

    RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. However, there is not yet a clear consensus about best practices, which makes the choice of an appropriate method a daunting task, especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. © The Author 2013. Published by Oxford University Press.

  18. Approximated adjusted fractional Bayes factors: A general method for testing informative hypotheses.

    PubMed

    Gu, Xin; Mulder, Joris; Hoijtink, Herbert

    2018-05-01

    Informative hypotheses are increasingly being used in psychological sciences because they adequately capture researchers' theories and expectations. In the Bayesian framework, the evaluation of informative hypotheses often makes use of default Bayes factors such as the fractional Bayes factor. This paper approximates and adjusts the fractional Bayes factor such that it can be used to evaluate informative hypotheses in general statistical models. In the fractional Bayes factor a fraction parameter must be specified which controls the amount of information in the data used for specifying an implicit prior. The remaining fraction is used for testing the informative hypotheses. We discuss different choices of this parameter and present a scheme for setting it. Furthermore, a software package is described which computes the approximated adjusted fractional Bayes factor. Using this software package, psychological researchers can evaluate informative hypotheses by means of Bayes factors in an easy manner. Two empirical examples are used to illustrate the procedure. © 2017 The British Psychological Society.

  19. GENERAL PURPOSE ADA PACKAGES

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    Ten families of subprograms are bundled together for the General-Purpose Ada Packages. The families bring to Ada many features from HAL/S, PL/I, FORTRAN, and other languages. These families are: string subprograms (INDEX, TRIM, LOAD, etc.); scalar subprograms (MAX, MIN, REM, etc.); array subprograms (MAX, MIN, PROD, SUM, GET, and PUT); numerical subprograms (EXP, CUBIC, etc.); service subprograms (DATE_TIME function, etc.); Linear Algebra II; Runge-Kutta integrators; and three text I/O families of packages. In two cases, a family consists of a single non-generic package. In all other cases, a family comprises a generic package and its instances for a selected group of scalar types. All generic packages are designed to be easily instantiated for the types declared in the user facility. The linear algebra package is LINRAG2. This package includes subprograms supplementing those in NPO-17985, An Ada Linear Algebra Package Modeled After HAL/S (LINRAG). Please note that LINRAG2 cannot be compiled without LINRAG. Most packages have widespread applicability, although some are oriented for avionics applications. All are designed to facilitate writing new software in Ada. Several of the packages use conventions introduced by other programming languages. A package of string subprograms is based on HAL/S (a language designed for the avionics software in the Space Shuttle) and PL/I. Packages of scalar and array subprograms are taken from HAL/S or generalized current Ada subprograms. A package of Runge-Kutta integrators is patterned after a built-in MAC (MIT Algebraic Compiler) integrator. Those packages modeled after HAL/S make it easy to translate existing HAL/S software to Ada. The General-Purpose Ada Packages program source code is available on two 360K 5.25" MS-DOS format diskettes. The software was developed using VAX Ada v1.5 under DEC VMS v4.5. It should be portable to any validated Ada compiler and it should execute either interactively or in batch. The largest package requires 205K of main memory on a DEC VAX running VMS. The software was developed in 1989, and is a copyrighted work with all copyright vested in NASA.

  20. Regional tectonic evaluation of the Tuscan Apenine, vulcanism, thermal anomalies and the relation to structural units

    NASA Technical Reports Server (NTRS)

    Bodechtel, J. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Geological interpretation of data covering the Italian peninsula led to the recognition of tectonic features which are explained by a clockwise rotation of various blocks along left-handed transform faults. These faults can be interpreted as resulting from shear due to a main stress directed north-eastwards. A land use map of the mountainous regions of Italy was produced at a scale of 1:250,000. For the digital treatment of MSS-CCTs, image processing software was written in FORTRAN IV. The software package includes descriptive statistics as well as classification algorithms.

  1. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    PubMed

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
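
    As an illustration of what these procedures compute, here is a minimal sketch of Horn's parallel analysis in Python with numpy. O'Connor's actual programs are SPSS and SAS syntax and also cover Velicer's MAP test; the function name and defaults below are illustrative only.

      import numpy as np

      def parallel_analysis(data, n_iter=1000, percentile=95, seed=0):
          """Number of components whose eigenvalues beat those of random data."""
          rng = np.random.default_rng(seed)
          n, p = data.shape
          real_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
          rand_eigs = np.empty((n_iter, p))
          for i in range(n_iter):
              noise = rng.standard_normal((n, p))
              rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
          threshold = np.percentile(rand_eigs, percentile, axis=0)
          keep = real_eigs > threshold              # compare component by component
          return int(p if keep.all() else np.argmax(~keep))

    A component is retained only while its observed eigenvalue exceeds the chosen percentile of eigenvalues from random data of the same dimensions, which is what distinguishes the procedure from the flawed eigenvalues-greater-than-one rule.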

  2. RSEIS and RFOC: Seismic Analysis in R

    NASA Astrophysics Data System (ADS)

    Lees, J. M.

    2015-12-01

    Open software is essential for reproducible scientific exchange. R packages provide a platform for development of seismological investigation software that can be properly documented and traced for data processing. A suite of R packages designed for a wide range of seismic analysis is currently available in the free software platform called R. R is a software platform based on the S language developed at Bell Labs decades ago. Routines in R can be run as standalone function calls, or developed in object-oriented mode. R comes with a base set of routines, and thousands of user-developed packages. The packages developed at UNC include subroutines and interactive codes for processing seismic data, analyzing geographic information (GIS) and inverting data involved in a variety of geophysical applications. The packages related to seismic analysis currently available on CRAN (Comprehensive R Archive Network, http://www.r-project.org/) are RSEIS, Rquake, GEOmap, RFOC, zoeppritz, RTOMO, geophys, Rwave, PEIP, hht, and rFDSN. These include signal processing, data management, mapping, earthquake location, deconvolution, focal mechanisms, wavelet transforms, Hilbert-Huang transforms, tomographic inversion, and Mogi deformation, among other useful functionality. All software in R packages is required to have detailed documentation, making the exchange and modification of existing software easy. In this presentation, I will focus on the packages RSEIS and RFOC, showing examples from a variety of seismic analyses. The R approach has similarities to the popular (and expensive) MATLAB platform, although R is open source and free to download.

  3. Laboratory Connections: Review of Two Commercial Interfacing Packages.

    ERIC Educational Resources Information Center

    Powers, Michael H.

    1989-01-01

    Evaluates two Apple II interfacing packages designed to measure pH: (1) "Experiments in Chemistry" by HRM Software and (2) "Voltage Plotter III" by Vernier Software. Provides characteristics and screen dumps of each package. Reports both systems are suitable for high school or beginning college laboratories. (MVL)

  4. MicroSIFT Courseware Evaluations [Set 15 (362-388) and Set 16 (389-441), with an Index Listing the Contents of Each Set (Sets 1-16) and a Cumulative Subject Index (Sets 1-16)].

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This document consists of 80 microcomputer software package evaluations prepared by the MicroSIFT (Microcomputer Software and Information for Teachers) Clearinghouse at the Northwest Regional Education Laboratory. Set 15 consists of 27 packages; set 16 consists of 53 packages. Each software review lists producer, time and place of evaluation,…

  5. Tolerancing aspheres based on manufacturing statistics

    NASA Astrophysics Data System (ADS)

    Wickenhagen, S.; Möhl, A.; Fuchs, U.

    2017-11-01

    A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions may be assumed, all of these analyses rely on statistics, which usually means several hundred or several thousand simulated systems are needed for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The large database of asphericon was used to investigate the correlation between given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed, aiming for a robust optical tolerancing process.
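
    For readers unfamiliar with the baseline method, a minimal Monte Carlo tolerancing sketch in Python follows. The merit function, tolerance values, and yield criterion are placeholder assumptions; a real analysis would evaluate the optical design software's merit function for each perturbed system.

      import numpy as np

      rng = np.random.default_rng(42)

      def merit(radius_err, thickness_err):
          # Hypothetical sensitivity model: quadratic penalty on both errors
          return 1.0 - 50.0 * radius_err**2 - 80.0 * thickness_err**2

      n_systems = 5000                                     # "several thousand" trial systems
      radius = rng.normal(0.0, 0.002, n_systems)           # assumed normal distribution
      thickness = rng.uniform(-0.005, 0.005, n_systems)    # assumed uniform tolerance band

      yield_rate = np.mean(merit(radius, thickness) > 0.95)
      print(f"Predicted yield: {yield_rate:.1%}")

    The paper's point is that replacing such assumed distributions with probability distributions estimated from measured production data makes the analysis meaningful even for small batch sizes.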

  6. Basic analysis of reflectometry data software package for the analysis of multilayered structures according to reflectometry data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Astaf'ev, S. B., E-mail: bard@ns.crys.ras.ru; Shchedrin, B. M.; Yanusova, L. G.

    2012-01-15

    The main principles of developing the Basic Analysis of Reflectometry Data (BARD) software package, which is aimed at obtaining a unified (standardized) tool for analyzing the structure of thin multilayer films and nanostructures of different nature based on reflectometry data, are considered. This software package contains both traditionally used procedures for processing reflectometry data and the authors' original developments on the basis of new methods for carrying out and analyzing reflectometry experiments. The structure of the package, its functional possibilities, examples of application, and prospects of development are reviewed.

  7. Color-coded automated signal intensity curves for detection and characterization of breast lesions: preliminary evaluation of a new software package for integrated magnetic resonance-based breast imaging.

    PubMed

    Pediconi, Federica; Catalano, Carlo; Venditti, Fiammetta; Ercolani, Mauro; Carotenuto, Luigi; Padula, Simona; Moriconi, Enrica; Roselli, Antonella; Giacomelli, Laura; Kirchin, Miles A; Passariello, Roberto

    2005-07-01

    The objective of this study was to evaluate the value of a color-coded automated signal intensity curve software package for contrast-enhanced magnetic resonance mammography (CE-MRM) in patients with suspected breast cancer. Thirty-six women with suspected breast cancer based on mammographic and sonographic examinations were preoperatively evaluated on CE-MRM. CE-MRM was performed on a 1.5-T magnet using a 2D Flash dynamic T1-weighted sequence. A dosage of 0.1 mmol/kg of Gd-BOPTA was administered at a flow rate of 2 mL/s followed by 10 mL of saline. Images were analyzed with the new software package and separately with a standard display method. Statistical comparison was performed of the confidence for lesion detection and characterization with the 2 methods and of the diagnostic accuracy for characterization compared with histopathologic findings. At pathology, 54 malignant lesions and 14 benign lesions were evaluated. All 68 (100%) lesions were detected with both methods and good correlation with histopathologic specimens was obtained. Confidence for both detection and characterization was significantly (P ≤ 0.025) better with the color-coded method, although no difference (P > 0.05) between the methods was noted in terms of the sensitivity, specificity, and overall accuracy for lesion characterization. Excellent agreement between the 2 methods was noted for both the determination of lesion size (kappa = 0.77) and determination of SI/T curves (kappa = 0.85). The novel color-coded signal intensity curve software allows lesions to be visualized as false color maps that correspond to conventional signal intensity time curves. Detection and characterization of breast lesions with this method is quick and easily interpretable.

  8. A Comparison of Authoring Software for Developing Mathematics Self-Learning Software Packages.

    ERIC Educational Resources Information Center

    Suen, Che-yin; Pok, Yang-ming

    Four years ago, the authors started to develop a self-paced mathematics learning software package called NPMaths using an authoring package called Tencore. However, NPMaths had some weak points. A development team was hence formed to develop similar software called Mathematics On Line. This time the team used another development language called…

  9. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
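
    A minimal Python sketch of the idea follows: simulate data from a simple mediation model, test the indirect effect with a percentile bootstrap, and take the rejection rate across replications as the power estimate. The bmem package does this in R for far richer models (latent mediators, multiple groups, nonnormal data), so all names and settings below are illustrative only.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(n, a, b, cp):
          """Draw one sample from a simple mediation model X -> M -> Y."""
          x = rng.standard_normal(n)
          m = a * x + rng.standard_normal(n)
          y = cp * x + b * m + rng.standard_normal(n)
          return x, m, y

      def indirect(x, m, y):
          a_hat = np.polyfit(x, m, 1)[0]                    # slope of M on X
          design = np.column_stack([x, m, np.ones_like(y)])
          b_hat = np.linalg.lstsq(design, y, rcond=None)[0][1]
          return a_hat * b_hat                              # estimated a*b

      def power(n, a, b, cp, n_rep=200, n_boot=300, alpha=0.05):
          hits = 0
          for _ in range(n_rep):
              x, m, y = simulate(n, a, b, cp)
              boots = []
              for _ in range(n_boot):
                  idx = rng.integers(0, n, n)               # resample cases
                  boots.append(indirect(x[idx], m[idx], y[idx]))
              lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
              hits += (lo > 0) or (hi < 0)                  # CI excludes zero
          return hits / n_rep

      print(power(n=100, a=0.3, b=0.3, cp=0.1))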

  10. Technology Assessment Software Package: Final Report.

    ERIC Educational Resources Information Center

    Hutinger, Patricia L.

    This final report describes the Technology Assessment Software Package (TASP) Project, which produced developmentally appropriate technology assessment software for children from 18 months through 8 years of age who have moderate to severe disabilities that interfere with their interaction with people, objects, tasks, and events in their…

  11. The Hidden Cost of Buying a Computer.

    ERIC Educational Resources Information Center

    Johnson, Michael

    1983-01-01

    In order to process data in a computer, application software must be either developed or purchased. Costs for modifications of the software package and maintenance are often hidden. The decision to buy or develop software packages should be based upon factors of time and maintenance. (MLF)

  12. Software for Managing Personal Files.

    ERIC Educational Resources Information Center

    Lundeen, Gerald

    1989-01-01

    Discusses the special characteristics of personal file management software and compares four microcomputer software packages: Notebook II with Bibliography and Convert, Pro-Cite with Biblio-Links, askSam, and Reference Manager. Each package is evaluated in terms of the user interface, file maintenance, retrieval capabilities, output, and…

  13. Software design for analysis of multichannel intracardial and body surface electrocardiograms.

    PubMed

    Potse, Mark; Linnenbank, André C; Grimbergen, Cornelis A

    2002-11-01

    Analysis of multichannel ECG recordings (body surface maps (BSMs) and intracardial maps) requires special software. We created a software package and a user interface on top of a commercial data analysis package (MATLAB) by a combination of high-level and low-level programming. Our software was created to satisfy the needs of a diverse group of researchers. It can handle a large variety of recording configurations. It allows for interactive usage through a fast and robust user interface, and batch processing for the analysis of large amounts of data. The package is user-extensible, includes routines for both common and experimental data processing tasks, and works on several computer platforms. The source code is made intelligible using software for structured documentation and is available to the users. The package is currently used by more than ten research groups analysing ECG data worldwide.

  14. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster and pipeline in a few steps. This allows researchers to scale up computations from their desktops, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.

  15. 1986 Petroleum Software Directory. [800 mini, micro and mainframe computer software packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-01-01

    Pennwell's 1986 Petroleum Software Directory is a complete listing of software created specifically for the petroleum industry. Details are provided on over 800 mini, micro and mainframe computer software packages from more than 250 different companies. An accountant can locate programs to automate bookkeeping functions in large oil and gas production firms. A pipeline engineer will find programs designed to calculate line flow and wellbore pressure drop.

  16. Diagnostic evaluation of three cardiac software packages using a consecutive group of patients

    PubMed Central

    2011-01-01

    Purpose The aim of this study was to compare the diagnostic performance of the three software packages 4DMSPECT (4DM), Emory Cardiac Toolbox (ECTb), and Cedars Quantitative Perfusion SPECT (QPS) for quantification of myocardial perfusion scintigram (MPS) using a large group of consecutive patients. Methods We studied 1,052 consecutive patients who underwent 2-day stress/rest 99mTc-sestamibi MPS studies. The reference/gold-standard classifications for the MPS studies were obtained from three physicians, with more than 25 years each of experience in nuclear cardiology, who re-evaluated all MPS images. Automatic processing was carried out using 4DM, ECTb, and QPS software packages. Total stress defect extent (TDE) and summed stress score (SSS) based on a 17-segment model were obtained from the software packages. Receiver-operating characteristic (ROC) analysis was performed. Results A total of 734 patients were classified as normal and the remaining 318 were classified as having infarction and/or ischemia. The performances of the software packages, calculated as the area under the SSS ROC curve, were 0.87 for 4DM, 0.80 for QPS, and 0.76 for ECTb (QPS vs. ECTb p = 0.03; other differences p < 0.0001). The areas under the TDE ROC curve were 0.87 for 4DM, 0.82 for QPS, and 0.76 for ECTb (QPS vs. ECTb p = 0.0005; other differences p < 0.0001). Conclusion There are considerable differences in performance between the three software packages, with 4DM showing the best performance and ECTb the worst. These differences in performance should be taken into consideration when software packages are used in clinical routine or in clinical studies. PMID:22214226
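
    For reference, the area under an ROC curve can be computed directly from scores and ground-truth labels via the Mann-Whitney statistic, as in this Python sketch (the scores and labels below are hypothetical toy values, not the study's data):

      import numpy as np

      def auc(scores, labels):
          """Area under the ROC curve via the Mann-Whitney formulation."""
          pos = scores[labels == 1]
          neg = scores[labels == 0]
          greater = (pos[:, None] > neg[None, :]).mean()
          ties = (pos[:, None] == neg[None, :]).mean()
          return greater + 0.5 * ties

      sss = np.array([7, 1, 4, 0, 9, 2], dtype=float)   # hypothetical summed stress scores
      truth = np.array([1, 0, 1, 0, 1, 0])              # 1 = infarction and/or ischemia
      print(auc(sss, truth))                            # 1.0 for this toy separation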

  17. A Study of Visualization for Mathematics Education

    NASA Technical Reports Server (NTRS)

    Daugherty, Sarah C.

    2008-01-01

    Graphical representations such as figures, illustrations, and diagrams play a critical role in mathematics and they are equally important in mathematics education. However, graphical representations in mathematics textbooks are static, i.e., they are used to illustrate only a specific example or a limited set of examples. By using computer software to visualize mathematical principles, there is virtually no limit to the number of specific cases and examples that can be demonstrated. However, we have not seen widespread adoption of visualization software in mathematics education. There are currently a number of software packages that provide visualization of mathematics for research and also software packages specifically developed for mathematics education. We conducted a survey of mathematics visualization software packages, summarized their features and user bases, and analyzed their limitations. In this survey, we focused on evaluating the software packages for their use with mathematical subjects adopted by institutions of secondary education in the United States (middle schools and high schools), including algebra, geometry, trigonometry, and calculus. We found that cost, complexity, and lack of flexibility are the major factors that hinder the widespread use of mathematics visualization software in education.

  18. Dill: an algorithm and a symbolic software package for doing classical supersymmetry calculations

    NASA Astrophysics Data System (ADS)

    Lučić, Vladan

    1995-11-01

    An algorithm is presented that formalizes different steps in a classical supersymmetric (SUSY) calculation. Based on this algorithm, Dill, a symbolic software package that can perform such calculations, was developed in the Mathematica programming language. While the algorithm is quite general, the package was created for the 4-D, N = 1 model. Nevertheless, with little modification, the package could be used for other SUSY models. The package has been tested and some of the results are presented.

  19. An Ada Linear-Algebra Software Package Modeled After HAL/S

    NASA Technical Reports Server (NTRS)

    Klumpp, Allan R.; Lawson, Charles L.

    1990-01-01

    New avionics software written more easily. Software package extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as Space Station flight software. In addition to built-in functions of HAL/S, package incorporates quaternion functions used in Space Shuttle and Galileo projects and routines from LINPAK solving systems of equations involving general square matrices. Contains two generic programs: one for floating-point computations and one for integer computations. Written on IBM/AT personal computer running under PC DOS, v.3.1.

  20. NDAS Hardware Translation Layer Development

    NASA Technical Reports Server (NTRS)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware from each test stand varies, drivers for each stand have to be made. These drivers will act more like plugins for the software. If the software is being used in E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also be filled with hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.

  1. Academic Web Authoring Multimedia Development and Course Management Tools

    ERIC Educational Resources Information Center

    Halloran, Margaret E.

    2005-01-01

    Course management software enables faculty members to learn one software package for web-based curriculum, assessment, synchronous and asynchronous discussions, collaborative work, multimedia and interactive resource development. There are as many as 109 different course management software packages on the market and several studies have evaluated…

  2. Improving Maintenance Data Collection Via Point-of- Maintenance (POMX) Implementation

    DTIC Science & Technology

    2006-03-01

    accurate documentation, (3) identifying and correcting the root causes for poor data integrity, and (4) educating the unit on the critical need for data ...the validity of the results. The data in this study were analyzed using the SAS JMP 6.0 statistical software package. The results for the tests...traditional keyboard data entry methods at a computer terminal. These terminals are typically located in the aircraft maintenance unit (AMU) facility, away

  3. Improving Maintenance Data Collection Via Point-Of-Maintenance (POMX) Implementation

    DTIC Science & Technology

    2006-03-01

    accurate documentation, (3) identifying and correcting the root causes for poor data integrity, and (4) educating the unit on the critical need for data ...the validity of the results. The data in this study were analyzed using the SAS JMP 6.0 statistical software package. The results for the tests...traditional keyboard data entry methods at a computer terminal. These terminals are typically located in the aircraft maintenance unit (AMU) facility, away

  4. Quantifying the Relationship between AMC Resources and U.S. Army Materiel Readiness

    DTIC Science & Technology

    1989-08-25

    Resource Management report 984 for the same period. Insufficient data precluded analysis of the OMA PEs Total Package Fielding and Life Cycle Software...procurement, had the greatest failure rates when subjected to the statistical tests merely because of the reduced number of data pairs. Analyses of...ENGINEERING DEVELOPMENT 6.5 - MANAGEMENT AND SUPPORT 6.7 - OPERATIONAL SYSTEM DEVELOPMENT P2 - GENERAL PURPOSE FORCES P3 - INTELLIGENCE AND COMMUNICATIONS P7

  5. A Study of the NASS-CDS System for Injury/Fatality Rates of Occupants in Various Restraints and A Discussion of Alternative Presentation Methods

    PubMed Central

    Stucki, Sheldon Lee; Biss, David J.

    2000-01-01

    An analysis was performed using the National Automotive Sampling System Crashworthiness Data System (NASS-CDS) database to compare the injury/fatality rates of variously restrained driver occupants as compared to unrestrained driver occupants in the total database of drivers/frontals, and also by Delta-V. A structured search of the NASS-CDS was done using the SAS® statistical analysis software to extract the data for this analysis and the SUDAAN software package was used to arrive at statistical significance indicators. In addition, this paper goes on to investigate different methods for presenting results of accident database searches including significance results; a risk versus Delta-V format for specific exposures; and, a percent cumulative injury versus Delta-V format to characterize injury trends. These alternative analysis presentation methods are then discussed by example using the present study results. PMID:11558105

  6. WebArray: an online platform for microarray data analysis

    PubMed Central

    Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng

    2005-01-01

    Background Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions were based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, PCA-assisted normalization method and genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weight, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, and chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165
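
    One of the listed steps, false discovery rate estimation, is commonly performed with the Benjamini-Hochberg procedure; a plain-numpy sketch follows for illustration (WebArray itself delegates such computations to limma and related Bioconductor code, so this is the idea rather than its implementation).

      import numpy as np

      def bh_adjust(pvals):
          """Benjamini-Hochberg adjusted p-values (monotone, capped at 1)."""
          p = np.asarray(pvals, dtype=float)
          m = len(p)
          order = np.argsort(p)
          scaled = p[order] * m / np.arange(1, m + 1)
          # Enforce monotonicity from the largest p-value downwards
          q = np.minimum.accumulate(scaled[::-1])[::-1].clip(max=1.0)
          out = np.empty(m)
          out[order] = q
          return out

      print(bh_adjust([0.001, 0.008, 0.039, 0.041, 0.042, 0.6]))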

  7. Data from fitting Gaussian process models to various data sets using eight Gaussian process software packages.

    PubMed

    Erickson, Collin B; Ankenman, Bruce E; Sanchez, Susan M

    2018-06-01

    This data article provides the summary data from tests comparing various Gaussian process software packages. Each spreadsheet represents a single function or type of function using a particular input sample size. In each spreadsheet, a row gives the results for a particular replication using a single package. Within each spreadsheet there are the results from eight Gaussian process model-fitting packages on five replicates of the surface. There is also one spreadsheet comparing the results from two packages performing stochastic kriging. These data enable comparisons between the packages to determine which package will give users the best results.
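
    As a reminder of the kind of model the compared packages fit, here is a minimal Gaussian process regression in Python via scikit-learn; the test surface below is a stand-in, not one of the article's benchmark functions, and scikit-learn is used here simply as one convenient example package.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)
      X = rng.uniform(0, 10, (30, 1))                  # 30-point input sample
      y = np.sin(X).ravel() + rng.normal(0, 0.1, 30)   # noisy responses

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
      gp.fit(X, y)
      mean, std = gp.predict(np.array([[5.0]]), return_std=True)
      print(mean, std)                                 # prediction with uncertainty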

  8. Modeling and MBL: Software Tools for Science.

    ERIC Educational Resources Information Center

    Tinker, Robert F.

    Recent technological advances and new software packages put unprecedented power for experimenting and theory-building in the hands of students at all levels. Microcomputer-based laboratory (MBL) and model-solving tools illustrate the educational potential of the technology. These tools include modeling software and three MBL packages (which are…

  9. A Characteristics Approach to the Evaluation of Economics Software Packages.

    ERIC Educational Resources Information Center

    Lumsden, Keith; Scott, Alex

    1988-01-01

    Utilizes Bloom's Taxonomy to identify elements of teacher and student interest. Depicts the way in which these interests are developed into characteristics for use in analytically evaluating software. Illustrates the use of this evaluating technique by appraising the much used software package "Running the British Economy." (KO)

  10. Scientific Software: How to Find What You Need and Get What You Pay for.

    ERIC Educational Resources Information Center

    Gabaldon, Diana J.

    1984-01-01

    Provides examples of software for the sciences, including: packages for pathology/toxicology laboratories (costing over $15,000), DNA sequencing, and data acquisition/analysis; general-purpose software for scientific uses; and "custom" packages, including a program to maintain a listing of "Escherichia coli" strains and a…

  11. Learn by Yourself: The Self-Learning Tools for Qualitative Analysis Software Packages

    ERIC Educational Resources Information Center

    Freitas, Fábio; Ribeiro, Jaime; Brandão, Catarina; Reis, Luís Paulo; de Souza, Francislê Neri; Costa, António Pedro

    2017-01-01

    Computer Assisted Qualitative Data Analysis Software (CAQDAS) are tools that help researchers to develop qualitative research projects. These software packages help the users with tasks such as transcription analysis, coding and text interpretation, writing and annotation, content search and analysis, recursive abstraction, grounded theory…

  12. An Overview of Software for Conducting Dimensionality Assessment in Multidimensional Models

    ERIC Educational Resources Information Center

    Svetina, Dubravka; Levy, Roy

    2012-01-01

    An overview of popular software packages for conducting dimensionality assessment in multidimensional models is presented. Specifically, five popular software packages are described in terms of their capabilities to conduct dimensionality assessment with respect to the nature of analysis (exploratory or confirmatory), types of data (dichotomous,…

  13. Cognitive Foundry v. 3.0 (OSS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basilico, Justin; Dixon, Kevin; McClain, Jonathan

    2009-11-18

    The Cognitive Foundry is a unified collection of tools designed for research and applications that use cognitive modeling, machine learning, or pattern recognition. The software library contains design patterns, interface definitions, and default implementations of reusable software components and algorithms designed to support a wide variety of research and development needs. The library contains three main software packages: the Common package that contains basic utilities and linear algebraic methods, the Cognitive Framework package that contains tools to assist in implementing and analyzing theories of cognition, and the Machine Learning package that provides general algorithms and methods for populating Cognitive Framework components from domain-relevant data.

  14. SIMPATIQCO: a server-based software suite which facilitates monitoring the time course of LC-MS performance metrics on Orbitrap instruments.

    PubMed

    Pichler, Peter; Mazanek, Michael; Dusberger, Frederico; Weilnböck, Lisa; Huber, Christian G; Stingl, Christoph; Luider, Theo M; Straube, Werner L; Köcher, Thomas; Mechtler, Karl

    2012-11-02

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC-MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge.
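
    The step of learning "the range indicating adequate system performance ... using robust statistics" can be pictured with a median/MAD control band, as in this Python sketch; the abstract does not spell out SIMPATIQCO's actual estimator, so this is an assumed stand-in.

      import numpy as np

      def robust_range(history, k=3.0):
          """Control limits from historical metric values via median and MAD."""
          med = np.median(history)
          mad = 1.4826 * np.median(np.abs(history - med))  # scaled to sigma for normal data
          return med - k * mad, med + k * mad

      peak_widths = np.array([12.1, 11.8, 12.4, 12.0, 30.5, 11.9, 12.2])  # one bad run
      low, high = robust_range(peak_widths)
      print([w for w in peak_widths if not low <= w <= high])  # flags the 30.5 outlier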

  15. SIMPATIQCO: A Server-Based Software Suite Which Facilitates Monitoring the Time Course of LC–MS Performance Metrics on Orbitrap Instruments

    PubMed Central

    2012-01-01

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC–MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge. PMID:23088386

  16. PyMVPA: A python toolbox for multivariate pattern analysis of fMRI data.

    PubMed

    Hanke, Michael; Halchenko, Yaroslav O; Sederberg, Per B; Hanson, Stephen José; Haxby, James V; Pollmann, Stefan

    2009-01-01

    Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.
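
    The core pattern, cross-validated classification of multivoxel activity, looks like the following generic Python sketch. PyMVPA wraps this with fMRI-specific dataset handling and many more algorithms; the scikit-learn names below are not PyMVPA's own API.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.standard_normal((80, 500))   # 80 trials x 500 voxels (toy data)
      y = np.repeat([0, 1], 40)            # two cognitive states
      X[y == 1, :50] += 0.5                # weak, distributed multivariate signal

      scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=8)
      print(scores.mean())                 # decoding accuracy vs. 0.5 chance level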

  17. PyMVPA: A Python toolbox for multivariate pattern analysis of fMRI data

    PubMed Central

    Hanke, Michael; Halchenko, Yaroslav O.; Sederberg, Per B.; Hanson, Stephen José; Haxby, James V.; Pollmann, Stefan

    2009-01-01

    Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine-learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability. PMID:19184561

  18. Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python.

    PubMed

    Gorgolewski, Krzysztof; Burns, Christopher D; Madison, Cindee; Clark, Dav; Halchenko, Yaroslav O; Waskom, Michael L; Ghosh, Satrajit S

    2011-01-01

    Current neuroimaging software offers users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research.
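
    A short example of the Node/Workflow idiom follows. The interface and field names follow Nipype's FSL wrappers as best recalled, and running it requires FSL to be installed, so treat the sketch as illustrative and check the current documentation.

      from nipype import Node, Workflow
      from nipype.interfaces.fsl import BET, IsotropicSmooth

      # Two interfaces chained into a pipeline: skull-strip, then smooth
      skullstrip = Node(BET(in_file="sub-01_T1w.nii.gz"), name="skullstrip")
      smooth = Node(IsotropicSmooth(fwhm=4), name="smooth")

      wf = Workflow(name="preproc", base_dir="work")
      wf.connect(skullstrip, "out_file", smooth, "in_file")
      wf.run()  # local execution; plugins enable multi-core and cluster runs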

  19. Nipype: A Flexible, Lightweight and Extensible Neuroimaging Data Processing Framework in Python

    PubMed Central

    Gorgolewski, Krzysztof; Burns, Christopher D.; Madison, Cindee; Clark, Dav; Halchenko, Yaroslav O.; Waskom, Michael L.; Ghosh, Satrajit S.

    2011-01-01

    Current neuroimaging software offers users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research. PMID:21897815

  20. MicroSIFT Courseware Evaluations. [Set 11 (223-259), Set 12 (260-293), and a Special Set of 99 LIBRA Reviews of Junior High School Science Software, Including Subject and Title Indexes Covering Sets 1-12 and Special Set L].

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This document consists of 170 microcomputer software package evaluations prepared by the MicroSIFT (Microcomputer Software and Information for Teachers) Clearinghouse at the Northwest Regional Educational Laboratory. Set 11 consists of 37 packages. Set 12 consists of 34 packages. A special unnumbered set, entitled LIBRA Reviews, treats 99 packages…

  1. DRUGDOG 3.0: U.S. Navy Random Urinalysis Software Package

    DTIC Science & Technology

    1994-03-15

    Master's thesis, Naval Postgraduate School, Monterey, California, by Dale E. Wilson (report date: 15 March 1994; accession number AD-A281 748): DRUGDOG 3.0: U.S. Navy Random Urinalysis Software Package.

  2. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.

  3. Improving the quality of EHR recording in primary care: a data quality feedback tool.

    PubMed

    van der Bij, Sjoukje; Khan, Nasra; Ten Veen, Petra; de Bakker, Dinny H; Verheij, Robert A

    2017-01-01

    Electronic health record (EHR) data are used to exchange information among health care providers. For this purpose, the quality of the data is essential. We developed a data quality feedback tool that evaluates differences in EHR data quality among practices and software packages as part of a larger intervention. The tool was applied in 92 practices in the Netherlands using different software packages. Practices received data quality feedback in 2010 and 2012. We observed large differences in the quality of recording. For example, the percentage of episodes of care that had a meaningful diagnostic code ranged from 30% to 100%. Differences were highly related to the software package. A year after the first measurement, the quality of recording had improved significantly and differences decreased, with 67% of the physicians indicating that they had actively changed their recording habits based on the results of the first measurement. About 80% found the feedback helpful in pinpointing recording problems. One of the software vendors made changes in functionality as a result of the feedback. Our EHR data quality feedback tool is capable of highlighting differences among practices and software packages. As such, it also stimulates improvements. As substantial variability in recording is related to the software package, our study strengthens the evidence that data quality can be improved substantially by standardizing the functionalities of EHR software packages.

  4. Information Metacatalog for a Grid

    NASA Technical Reports Server (NTRS)

    Kolano, Paul

    2007-01-01

    SWIM is a Software Information Metacatalog that gathers detailed information about the software components and packages installed on a grid resource. Information is currently gathered for Executable and Linking Format (ELF) executables and shared libraries, Java classes, shell scripts, and Perl and Python modules. SWIM is built on top of the POUR framework, which is described in the preceding article. SWIM consists of a set of Perl modules for extracting software information from a system, an XML schema defining the format of data that can be added by users, and a POUR XML configuration file that describes how these elements are used to generate periodic, on-demand, and user-specified information. Periodic software information is derived mainly from the package managers used on each system. SWIM collects information from native package managers in FreeBSD, Solaris, and IRIX as well as the RPM, Perl, and Python package managers on multiple platforms. Because not all software is available or installed in package form, SWIM also crawls the set of relevant paths from the File System Hierarchy Standard that defines the standard file system structure used by all major UNIX distributions. Using these two techniques, the vast majority of software installed on a system can be located. SWIM computes the same information gathered by the periodic routines for specific files on specific hosts, and locates software on a system given only its name and type.

  5. The Package-Based Development Process in the Flight Dynamics Division

    NASA Technical Reports Server (NTRS)

    Parra, Amalia; Seaman, Carolyn; Basili, Victor; Kraft, Stephen; Condon, Steven; Burke, Steven; Yakimovich, Daniil

    1997-01-01

    The Software Engineering Laboratory (SEL) has been operating for more than two decades in the Flight Dynamics Division (FDD) and has adapted to the constant movement of the software development environment. The SEL's Quality Improvement Paradigm (QIP) shows that process improvement is an iterative process. Understanding, Assessing and Packaging are the three steps that are followed in this cyclical paradigm. As the improvement process cycles back to the first step, after having packaged some experience, the level of understanding will be greater. In the past, products resulting from the packaging step have been large process documents, guidebooks, and training programs. As the technical world moves toward more modularized software, we have made a move toward more modularized software development process documentation; as such, the products of the packaging step are becoming smaller and more frequent. In this manner, the QIP takes on a spiral rather than a waterfall approach. This paper describes the state of the FDD in the area of software development processes, as revealed through the understanding and assessing activities conducted by the COTS study team. The insights presented include: (1) a characterization of a typical FDD Commercial Off the Shelf (COTS) intensive software development life-cycle process, (2) lessons learned through the COTS study interviews, and (3) a description of changes in the SEL due to the changing and accelerating nature of software development in the FDD.

  6. Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project

    NASA Astrophysics Data System (ADS)

    Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo

    2017-04-01

    The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in the 2.0 version (Endrizzi et al., 2014), which was released as a free, open-source software project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. This weakness hindered its scientific potential and its use both as a standalone package and, more importantly, in an integrated way with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as its starting point the 2.0 version, scientifically tested and published. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al., 2016, WRR), provides the baseline code and a number of reference results as benchmarks. Comparison and scientific validation can then be performed for each software re-engineering activity carried out on the package. To keep track of every single change, the package is published on its own GitHub repository geotopmodel.github.io/geotop/ under the GPL v3.0 license. A Continuous Integration mechanism based on Travis-CI has been enabled on the repository's master and main development branches. The use of the CMake configuration tool and the suite of tests (easily manageable by means of ctest) greatly reduces the burden of installation and enhances portability across different compilers and operating systems. The package is also complemented by several software tools which provide web-based visualization of results based on R packages, in particular "shiny" (Chang et al., 2016), "geotopbricks" and "geotopOptim2" (Cordano et al., 2016), which allow rapid and efficient scientific validation of new examples and tests. The software re-engineering activities are still under development. However, our first results are promising enough to eventually reach a robust and stable software project that manages in a flexible way a complex state-of-the-art hydrological model like GEOtop and integrates it into wider workflows.

  7. Multiple-solution problems in a statistics classroom: an example

    NASA Astrophysics Data System (ADS)

    Chu, Chi Wing; Chan, Kevin L. T.; Chan, Wai-Sum; Kwong, Koon-Shing

    2017-11-01

    The mathematics education literature shows that encouraging students to develop multiple solutions for given problems has a positive effect on students' understanding and creativity. In this paper, we present an example of multiple-solution problems in statistics involving a set of non-traditional dice. In particular, we consider the exact probability mass function of the sum of face values. Four different ways of solving the problem are discussed. The solutions span various basic concepts in different mathematical disciplines (sample space in probability theory, the probability generating function in statistics, integer partition in basic combinatorics and the individual risk model in actuarial science) and thus promote upper undergraduate students' awareness of knowledge connections between their courses. All solutions of the example are implemented using the R statistical software package.
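
    As a concrete illustration of the sample-space solution, the sketch below enumerates all face pairs for one well-known pair of non-traditional dice (the Sicherman dice, used here as a stand-in, since this record does not reproduce the paper's dice) and tabulates the exact probability mass function of the sum.

      from collections import Counter
      from fractions import Fraction
      from itertools import product

      # Sicherman dice: a classic non-traditional pair whose sum distribution
      # matches that of two standard dice
      die_a = [1, 2, 2, 3, 3, 4]
      die_b = [1, 3, 4, 5, 6, 8]

      # enumerate the sample space and count each possible sum
      counts = Counter(a + b for a, b in product(die_a, die_b))
      total = len(die_a) * len(die_b)

      # exact PMF as rational numbers
      for s in sorted(counts):
          print(s, Fraction(counts[s], total))

    Enumerating the product set this way is equivalent to multiplying the two dice's probability generating functions, the second solution route listed in the abstract.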

  8. Global review of open access risk assessment software packages valid for global or continental scale analysis

    NASA Astrophysics Data System (ADS)

    Daniell, James; Simpson, Alanna; Gunasekara, Rashmin; Baca, Abigail; Schaefer, Andreas; Ishizawa, Oscar; Murnane, Rick; Tijssen, Annegien; Deparday, Vivien; Forni, Marc; Himmelfarb, Anne; Leder, Jan

    2015-04-01

    Over the past few decades, a plethora of open access software packages for the calculation of earthquake, volcanic, tsunami, storm surge, wind and flood risk have been produced globally. As part of the World Bank GFDRR Review released at the Understanding Risk 2014 Conference, over 80 such open access risk assessment software packages were examined. Commercial software was not considered in the evaluation. A preliminary analysis was used to determine whether the 80 models were currently supported and if they were open access. This process was used to select a subset of 31 models that include 8 earthquake models, 4 cyclone models, 11 flood models, and 8 storm surge/tsunami models for more detailed analysis. By using multi-criteria decision analysis (MCDA) and simple descriptions of the software uses, the review allows users to select a few relevant software packages for their own testing and development. The detailed analysis evaluated the models on the basis of over 100 criteria and provides a synopsis of available open access natural hazard risk modelling tools. In addition, volcano software packages have since been added, bringing the compendium of risk software tools to more than 100. There has been a huge increase in the quality and availability of open access/source software over the past few years. For example, private entities such as Deltares now have an open source policy regarding some flood models (NGHS). In addition, leaders in developing risk models in the public sector, such as Geoscience Australia (EQRM, TCRM, TsuDAT, AnuGA) or CAPRA (ERN-Flood, Hurricane, CRISIS2007 etc.), are launching and/or helping many other initiatives. As we achieve greater interoperability between modelling tools, we will also achieve a future wherein different open source and open access modelling tools will be increasingly connected and adapted towards unified multi-risk model platforms and highly customised solutions. It was seen that many software tools could be improved by enabling user-defined exposure and vulnerability. Without this function, many tools can only be used regionally and not at global or continental scale. It is becoming increasingly easy to use multiple packages for a single region and/or hazard to characterize the uncertainty in the risk, or to use them as checks for the sensitivities in the analysis. There is a potential for valuable synergy between existing software. A number of open source software packages could be combined to generate a multi-risk model with multiple views of a hazard. This extensive review has simply attempted to provide a platform for dialogue between all open source and open access software packages and to hopefully inspire collaboration between developers, given the great work done by all open access and open source developers.

  9. General-Purpose Ada Software Packages

    NASA Technical Reports Server (NTRS)

    Klumpp, Allan R.

    1991-01-01

    Collection of subprograms brings to Ada many features from other programming languages. All generic packages designed to be easily instantiated for types declared in user's facility. Most packages have widespread applicability, although some oriented for avionics applications. All designed to facilitate writing new software in Ada. Written on IBM/AT personal computer running under PC DOS, v.3.1.

  10. Advance Directives and Do Not Resuscitate Orders

    MedlinePlus

    ... a form. Call a lawyer. Use a computer software package for legal documents. Advance directives and living ... you write by yourself or with a computer software package should follow your state laws. You may ...

  11. Nested Cohort - R software package

    Cancer.gov

    NestedCohort is an R software package for fitting Kaplan-Meier and Cox Models to estimate standardized survival and attributable risks for studies where covariates of interest are observed on only a sample of the cohort.

  12. An Integrated Software Package to Enable Predictive Simulation Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang

    The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance computing (HPC) techniques, but also a smooth, well-integrated interplay between HPC applications. This paper presents a new software package that integrates HPC applications and a web-based visualization tool on top of a middleware framework. This framework supports the data communication between the different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.

  13. Software packager user's guide

    NASA Technical Reports Server (NTRS)

    Callahan, John R.

    1995-01-01

    Software integration is a growing area of concern for many programmers and software managers because the need to build new programs quickly from existing components is greater than ever. This includes building versions of software products for multiple hardware platforms and operating systems, building programs from components written in different languages, and building systems from components that must execute on different machines in a distributed network. The goal of software integration is to make building new programs from existing components more seamless -- programmers should pay minimal attention to the underlying configuration issues involved. Libraries of reusable components and classes are important tools but only partial solutions to software development problems. Even though software components may have compatible interfaces, there may be other reasons, such as differences between execution environments, why they cannot be integrated. Often, components must be adapted or reimplemented to fit into another application because of implementation differences -- they are implemented in different programming languages, dependent on different operating system resources, or must execute on different physical machines. The software packager is a tool that allows programmers to deal with interfaces between software components and ignore complex integration details. The packager takes modular descriptions of the structure of a software system written in the package specification language and produces an integration program in the form of a makefile. If complex integration tools are needed to integrate a set of components, such as remote procedure call stubs, their use is implied by the packager automatically and stub generation tools are invoked in the corresponding makefile. The programmer deals only with the components themselves and not the details of how to build the system on any given platform.

  14. MicroSIFT Courseware Evaluations (169-198). Set 9. Including Subject and Title Indexes Covering Sets 1-9.

    ERIC Educational Resources Information Center

    Weaver, Dave, Ed.

    This document consists of 30 microcomputer software package evaluations prepared for the MicroSIFT (Microcomputer Software and Information for Teachers) Clearinghouse at the Northwest Regional Educational Laboratory (NWREL). The concise, single-sheet resume describing and evaluating each software package includes source, cost, ability level,…

  15. Annotated Bibliography of Computer Software for Teaching Early Reading and Spelling. Project RIMES 2000.

    ERIC Educational Resources Information Center

    Rhein, Deborah; Alibrandi, Mary; Lyons, Mary; Sammons, Janice; Doyle, Luther

    This bibliography, developed by Project RIMES (Reading Instructional Methods of Efficacy with Students) lists 80 software packages for teaching early reading and spelling to students at risk for reading and spelling failure. The software packages are presented alphabetically by title. Entries usually include a grade level indicator, a brief…

  16. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  17. NOBAI: a web server for character coding of geometrical and statistical features in RNA structure

    PubMed Central

    Knudsen, Vegeir; Caetano-Anollés, Gustavo

    2008-01-01

    The Numeration of Objects in Biology: Alignment Inferences (NOBAI) web server provides a web interface to the applications in the NOBAI software package. This software codes topological and thermodynamic information related to the secondary structure of RNA molecules as multi-state phylogenetic characters, builds character matrices directly in NEXUS format and provides sequence randomization options. The web server is an effective tool that facilitates the search for evolutionary history embedded in the structure of functional RNA molecules. The NOBAI web server is accessible at ‘http://www.manet.uiuc.edu/nobai/nobai.php’. This web site is free and open to all users and there is no login requirement. PMID:18448469

  18. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    NASA Astrophysics Data System (ADS)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

    To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level and qualitative descriptions of processes and thus make the process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages: G2 (real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  19. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore, it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper, the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period, Leq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed, user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to other statistical methods in traffic noise level prediction. - Highlights: • We propose an ANN model for the prediction of traffic noise. • We developed an originally designed, user-friendly software package. • The results are compared with classical statistical methods. • The ANN model shows much better predictive capability.
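
    For readers who want to reproduce the idea rather than the study, the sketch below fits a small neural network to synthetic traffic data. It is a generic illustration in Python with scikit-learn, not the authors' software, and the data-generating formula is invented for the example.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n = 500
      cars = rng.integers(50, 1500, n)    # light vehicles per hour (synthetic)
      trucks = rng.integers(0, 200, n)    # heavy vehicles per hour (synthetic)
      speed = rng.uniform(20, 90, n)      # average flow speed, km/h (synthetic)

      # synthetic equivalent noise level Leq (dB); NOT the paper's measurements
      leq = 40 + 10 * np.log10(cars + 8 * trucks) - 0.05 * speed + rng.normal(0, 1, n)

      X = np.column_stack([cars, trucks, speed])
      X_tr, X_te, y_tr, y_te = train_test_split(X, leq, random_state=0)

      # scale inputs, then fit a small feed-forward network
      net = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                       random_state=0))
      net.fit(X_tr, y_tr)
      print("held-out R^2:", net.score(X_te, y_te))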

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salama, A.; Mikhail, M.

    Comprehensive software packages have been developed at the Western Research Centre as tools to help coal preparation engineers analyze, evaluate, and control coal cleaning processes. The COal Preparation Software package (COPS) performs three functions: (1) data handling and manipulation; (2) data analysis, including the generation of washability data, performance evaluation and prediction, density and size modeling, and evaluation of density and size partition characteristics and attrition curves; and (3) generation of graphics output. The Separation ChARacteristics Estimation (SCARE) software packages are developed to balance raw density or size separation data. The cases of density and size separation data are considered. The generated data can take balanced or normalized forms. The scaled form is desirable for direct determination of the partition functions (curves). The raw and generated separation data are displayed in tabular and/or graphical forms. The computer software packages described in this paper are valuable tools for coal preparation plant engineers and operators for evaluating process performance, adjusting plant parameters, and balancing raw density or size separation data. These packages have been applied very successfully in many projects carried out by WRC for the Canadian coal preparation industry. The software packages are designed to run on a personal computer (PC).

  1. Software Library for Bruker TopSpin NMR Data Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A software library for parsing and manipulating frequency-domain data files that have been processed using the Bruker TopSpin NMR software package. In the context of NMR, the term "processed" indicates that the end-user of the Bruker TopSpin NMR software package has (a) Fourier transformed the raw, time-domain data (the Free Induction Decay) into the frequency-domain and (b) has extracted the list of NMR peaks.

  2. Investigation into the development of computer aided design software for space based sensors

    NASA Technical Reports Server (NTRS)

    Pender, C. W.; Clark, W. L.

    1987-01-01

    The described effort is phase one of the development of Computer Aided Design (CAD) software to be used to perform radiometric sensor design. The software package will be referred to as SCAD and is directed toward the preliminary phase of the design of space-based sensor systems. The approach being followed is to develop a modern, graphics-intensive, user-friendly software package using existing software as building blocks. The emphasis will be directed toward the development of a shell containing menus, smart defaults, and interfaces, which can accommodate a wide variety of existing application software packages. The shell will offer expected utilities such as graphics, tailored menus, and a variety of drivers for I/O devices. Following the development of the shell, the development of SCAD is planned as chiefly selection and integration of appropriate building blocks. The phase one development activities have included: the selection of hardware which will be used with SCAD; the determination of the scope of SCAD; the preliminary evaluation of a number of software packages for applicability to SCAD; determination of a method for achieving required capabilities where voids exist; and establishment of a strategy for binding the software modules into an easy-to-use tool kit.

  3. Effect of plasma spraying modes on material properties of internal combustion engine cylinder liners

    NASA Astrophysics Data System (ADS)

    Timokhova, O. M.; Burmistrova, O. N.; Sirina, E. A.; Timokhov, R. S.

    2018-03-01

    The paper analyses different methods of remanufacturing worn-out machine parts in order to obtain the best performance characteristics. One of the most promising of them is the plasma spraying method. The mathematical models presented in the paper are intended to predict the results of plasma spraying and its effect on the properties of the material of internal combustion engine cylinder liners under repair. The experimental data and research results were computer-processed with the Statistica 10.0 software package. The pair correlation coefficient values (R) and the F-statistic criterion are given to confirm the statistical properties and adequacy of the obtained regression equations.

  4. Software Review. Macintosh Laboratory Automation: Three Software Packages.

    ERIC Educational Resources Information Center

    Jezl, Barbara Ann

    1990-01-01

    Reviewed are "LABTECH NOTEBOOK,""LabVIEW," and "Parameter Manager pmPLUS/pmTALK." Each package is described including functions, uses, hardware, and costs. Advantages and disadvantages of this type of laboratory approach are discussed. (CW)

  5. eddy4R 0.2.0: a DevOps model for community-extensible processing and analysis of eddy-covariance data based on R, Git, Docker, and HDF5

    NASA Astrophysics Data System (ADS)

    Metzger, Stefan; Durden, David; Sturtevant, Cove; Luo, Hongyan; Pingintha-Durden, Natchaya; Sachs, Torsten; Serafimovich, Andrei; Hartmann, Jörg; Li, Jiahong; Xu, Ke; Desai, Ankur R.

    2017-08-01

    Large differences in instrumentation, site setup, data format, and operating system stymie the adoption of a universal computational environment for processing and analyzing eddy-covariance (EC) data. This results in limited software applicability and extensibility, in addition to often substantial inconsistencies in flux estimates. Addressing these concerns, this paper presents the systematic development of portable, reproducible, and extensible EC software achieved by adopting a development and systems operation (DevOps) approach. This software development model is used for the creation of the eddy4R family of EC code packages in the open-source R language for statistical computing. These packages are community developed, iterated via the Git distributed version control system, and wrapped into a portable and reproducible Docker filesystem that is independent of the underlying host operating system. The HDF5 hierarchical data format then provides a streamlined mechanism for highly compressed and fully self-documented data ingest and output. The usefulness of the DevOps approach was evaluated for three test applications. First, the resultant EC processing software was used to analyze standard flux tower data from the first EC instruments installed at a National Ecological Observatory Network (NEON) field site. Second, through an aircraft test application, we demonstrate the modular extensibility of eddy4R to analyze EC data from other platforms. Third, an intercomparison with commercial-grade software showed excellent agreement (R2 = 1.0 for CO2 flux). In conjunction with this study, a Docker image containing the first two eddy4R packages and an executable example workflow, as well as the first NEON EC data products, are released publicly. We conclude by describing the work remaining to arrive at the automated generation of science-grade EC fluxes and the benefits to the science community at large. This software development model is applicable beyond EC and more generally builds the capacity to deploy complex algorithms developed by scientists in an efficient and scalable manner. In addition, modularity permits meeting project milestones while retaining extensibility with time.

  6. Application of mixsep software package: Performance verification of male-mixed DNA analysis

    PubMed Central

    HU, NA; CONG, BIN; GAO, TAO; CHEN, YU; SHEN, JUNYI; LI, SHUJIN; MA, CHUNLING

    2015-01-01

    An experimental model of male-mixed DNA (n=297) was constructed according to the mixed DNA construction principle. This comprised the use of the Applied Biosystems (ABI) 7500 quantitative polymerase chain reaction system, with scientific validation of mixture proportion (Mx; root-mean-square error ≤0.02). Statistical analysis was performed on locus separation accuracy using mixsep, a DNA mixture separation R-package, and the analytical performance of mixsep was assessed by examining the data distribution pattern of different mixed gradients, short tandem repeat (STR) loci and mixed DNA types. The results showed that locus separation accuracy had a negative linear correlation with the mixed gradient (r=−0.7121). With increasing mixed gradient imbalance, locus separation accuracy first increased and then decreased, with the highest value detected at a gradient of 1:3 (≥90%). The mixed gradient, which is the theoretical Mx, was one of the primary factors that influenced the success of mixed DNA analysis. Among the 16 STR loci detected by Identifiler®, the separation accuracy was relatively high (>88%) for loci D5S818, D8S1179 and FGA, whereas the median separation accuracy value was lowest for the D7S820 locus. STR loci with relatively large numbers of allelic drop-out (ADO; >15) were all located in the yellow and red channels, including loci D18S51, D19S433, FGA, TPOX and vWA. These five loci featured low allele peak heights, which was consistent with the low sensitivity of the ABI 3130xl Genetic Analyzer to yellow and red fluorescence. The locus separation accuracy of the mixsep package was substantially different with and without the inclusion of ADO loci; inclusion of ADO significantly reduced the analytical performance of the mixsep package, which was consistent with the lack of an ADO functional module in this software. The present study demonstrated that the mixsep software had a number of advantages and was recommended for analysis of mixed DNA. This software was easy to operate and produced understandable results with a degree of controllability. PMID:25936428

  7. The Implication of Using NVivo Software in Qualitative Data Analysis: Evidence-Based Reflections.

    PubMed

    Zamawe, F C

    2015-03-01

    For a long time, electronic data analysis has been associated with quantitative methods. However, Computer Assisted Qualitative Data Analysis Software (CAQDAS) packages are increasingly being developed. Although CAQDAS has been available for decades, very few qualitative health researchers report using it. This may be due to the difficulties that one has to go through to master the software and the misconceptions associated with using CAQDAS. While the issue of mastering CAQDAS has received ample attention, little has been done to address the misconceptions associated with CAQDAS. In this paper, the author reflects on his experience of interacting with one of the popular CAQDAS packages (NVivo) in order to provide evidence-based implications of using the software. The key message is that, unlike statistical software, the main function of CAQDAS is not to analyse data but rather to aid the analysis process, which the researcher must always remain in control of. In other words, researchers must equally know that no software can analyse qualitative data. CAQDAS are basically data management packages, which support the researcher during analysis.

  8. Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras.

    PubMed

    Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore

    2014-01-01

    To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department and whether the use of different cameras for the acquisition of raw data influences the results. The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1 and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). CONCLUSION: Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results.
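
    The limits of agreement recommended in the conclusion come from the standard Bland-Altman calculation, sketched below on hypothetical paired LVEF estimates; it is a generic illustration, not the study's analysis code.

      import numpy as np

      def limits_of_agreement(a, b):
          """Bland-Altman bias and 95% limits of agreement between two methods."""
          diff = np.asarray(a, float) - np.asarray(b, float)
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      # hypothetical paired LVEF estimates (%) from two processing packages
      package_a = np.array([55.0, 62.1, 48.3, 70.2, 59.9, 64.4])
      package_b = np.array([50.4, 58.0, 44.1, 66.5, 55.2, 60.7])

      bias, (lo, hi) = limits_of_agreement(package_a, package_b)
      print(f"bias = {bias:.1f}%, 95% limits of agreement = [{lo:.1f}%, {hi:.1f}%]")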

  9. Development of a new modelling tool (FACET) to assess exposure to chemical migrants from food packaging.

    PubMed

    Oldring, P K T; O'Mahony, C; Dixon, J; Vints, M; Mehegan, J; Dequatre, C; Castle, L

    2014-01-01

    The approach used to obtain European Union-wide data on the usage and concentration of substances in different food packaging materials is described. Statistics were collected on pack sizes and market shares for the different materials used to package different food groups. The packaging materials covered were plastics (both flexible and rigid), metal containers, light metal packaging, paper and board, as well as the adhesives and inks used on them. An explanation of how these data are linked in the FACET exposure modelling tool is given, as well as an overview of the software and examples of the intermediate data tables. The example of bisphenol A (BPA), used in resins that may be incorporated into some coatings for canned foodstuffs, is used to illustrate how the data in FACET are combined to produce concentration distributions. Such concentration distributions are then linked probabilistically to the amounts of each food item consumed, as recorded in national food consumption survey diaries, in order to estimate exposure to packaging migrants. Estimates of exposure are at the level of the individual consumer and thus can be expressed for various percentiles of different populations and subpopulations covered by the national dietary surveys.
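
    The probabilistic linkage described above can be sketched generically: draw a migrant concentration and a food intake for each simulated consumer, combine them into an exposure, and read off percentiles. The distributions and parameters below are purely illustrative assumptions and do not come from FACET.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000  # simulated consumers

      # hypothetical inputs: migrant concentration in the food (mg/kg),
      # daily intake of the packaged food (kg/day), and body weight (kg)
      concentration = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=n)
      intake = rng.gamma(shape=2.0, scale=0.05, size=n)
      body_weight = rng.normal(70, 12, size=n).clip(40, 120)

      # per-consumer exposure in mg per kg body weight per day
      exposure = concentration * intake / body_weight
      for q in (50, 95, 99):
          print(f"P{q}: {np.percentile(exposure, q):.2e} mg/kg bw/day")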

  10. ELAS: A powerful, general purpose image processing package

    NASA Technical Reports Server (NTRS)

    Walters, David; Rickman, Douglas

    1991-01-01

    ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.

  11. On the release of cppxfel for processing X-ray free-electron laser images.

    PubMed

    Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K; Stuart, David Ian

    2016-06-01

    As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel, a software package primarily written in C++, which showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely to be also useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.

  12. On the release of cppxfel for processing X-ray free-electron laser images

    DOE PAGES

    Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K.; ...

    2016-05-11

    As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel, a software package primarily written in C++, which showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely to be also useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.

  13. An epidemiological modeling and data integration framework.

    PubMed

    Pfeifer, B; Wurz, M; Hanser, F; Seger, M; Netzer, M; Osl, M; Modre-Osprian, R; Schreier, G; Baumgartner, C

    2010-01-01

    In this work, a cellular automaton software package for simulating different infectious diseases, storing the simulation results in a data warehouse system and analyzing the obtained results to generate prediction models as well as contingency plans, is proposed. The Brisbane H3N2 flu virus, which spread during the 2009 winter season, was used for simulation in the federal state of Tyrol, Austria. The simulation-modeling framework consists of an underlying cellular automaton. The cellular automaton model is parameterized by known disease parameters, and geographical as well as demographical conditions are included for simulating the spreading. The data generated by simulation are stored in the back room of the data warehouse using the Talend Open Studio software package, and subsequent statistical and data mining tasks are performed using the Knowledge Discovery in Database Designer (KD3) tool. The obtained simulation results were used for generating prediction models for all nine federal states of Austria. The proposed framework provides a powerful and easy-to-handle interface for parameterizing and simulating different infectious diseases in order to generate prediction models and improve contingency plans for future events.
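
    A minimal cellular-automaton sketch of this kind of spreading simulation is given below; the grid, neighbourhood, and parameters are illustrative assumptions only and are not the authors' parameterization.

      import numpy as np

      rng = np.random.default_rng(1)
      S, I, R = 0, 1, 2                       # susceptible, infected, recovered
      grid = np.zeros((100, 100), dtype=int)  # one cell per person/locality
      grid[50, 50] = I                        # a single index case

      p_infect, p_recover = 0.25, 0.10        # illustrative parameters only
      for _ in range(60):                     # 60 time steps
          infected = (grid == I)
          inf = infected.astype(int)
          # number of infected neighbours (von Neumann neighbourhood, wrapping edges)
          nbrs = (np.roll(inf, 1, 0) + np.roll(inf, -1, 0)
                  + np.roll(inf, 1, 1) + np.roll(inf, -1, 1))
          # a susceptible cell escapes each infected neighbour independently
          p_any = 1.0 - (1.0 - p_infect) ** nbrs
          grid[(grid == S) & (rng.random(grid.shape) < p_any)] = I
          grid[infected & (rng.random(grid.shape) < p_recover)] = R

      print("final attack rate:", (grid == R).mean())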

  14. Office Computer Software: A Comprehensive Review of Software Programs.

    ERIC Educational Resources Information Center

    Secretary, 1992

    1992-01-01

    Describes types of software including system software, application software, spreadsheets, accounting software, graphics packages, desktop publishing software, database, desktop and personal information management software, project and records management software, groupware, and shareware. (JOW)

  15. Magnetic resonance imaging diffusion tensor tractography: evaluation of anatomic accuracy of different fiber tracking software packages.

    PubMed

    Feigl, Guenther C; Hiergeist, Wolfgang; Fellner, Claudia; Schebesch, Karl-Michael M; Doenitz, Christian; Finkenzeller, Thomas; Brawanski, Alexander; Schlaier, Juergen

    2014-01-01

    Diffusion tensor imaging (DTI)-based tractography has become an integral part of preoperative diagnostic imaging in many neurosurgical centers, and other nonsurgical specialties depend increasingly on DTI tractography as a diagnostic tool. The aim of this study was to analyze the anatomic accuracy of visualized white matter fiber pathways using different, readily available DTI tractography software programs. Magnetic resonance imaging scans of the head of 20 healthy volunteers were acquired using a Siemens Symphony TIM 1.5T scanner and a 12-channel head array coil. The standard settings of the scans in this study were 12 diffusion directions and 5-mm slices. The fornices were chosen as an anatomic structure for the comparative fiber tracking. Identical data sets were loaded into nine different fiber tracking packages that used different algorithms. The nine software packages and algorithms used were NeuroQLab (modified tensor deflection [TEND] algorithm), the Sörensen DTI task card (modified streamline tracking technique algorithm), the Siemens DTI module (modified fourth-order Runge-Kutta algorithm), six different software packages from Trackvis (interpolated streamline algorithm, modified FACT algorithm, second-order Runge-Kutta algorithm, Q-ball [FACT algorithm], tensorline algorithm, Q-ball [second-order Runge-Kutta algorithm]), DTI Query (modified streamline tracking technique algorithm), Medinria (modified TEND algorithm), Brainvoyager (modified TEND algorithm), DTI Studio (modified FACT algorithm), and the BrainLab DTI module based on the modified Runge-Kutta algorithm. A neuroradiologist, a magnetic resonance imaging physicist, and a neurosurgeon served as examiners. They were double-blinded with respect to the test subject and the fiber tracking software used in the presented images. Each examiner evaluated 301 images. The examiners were instructed to evaluate screenshots from the different programs based on two main criteria: (i) anatomic accuracy of the course of the displayed fibers and (ii) the number of fibers displayed outside the anatomic boundaries. The mean overall grade for anatomic accuracy was 2.2 (range, 1.1-3.6) with a standard deviation (SD) of 0.9. The mean overall grade for incorrectly displayed fibers was 2.5 (range, 1.6-3.5) with an SD of 0.6. The mean grade of the overall program ranking was 2.3 with an SD of 0.6. The overall mean grade of the program ranked number one (NeuroQLab) was 1.7 (range, 1.5-2.8). The mean overall grade of the program ranked last (BrainLab iPlan Cranial 2.6 DTI Module) was 3.3 (range, 1.7-4). The difference between the mean grades of these two programs was statistically highly significant (P < 0.0001). There was no statistically significant difference between the programs ranked 1-3: NeuroQLab, the Sörensen DTI Task Card, and the Siemens DTI module. The results of this study show that there is a statistically significant difference in the anatomic accuracy of the tested DTI fiber tracking programs. Whereas incorrectly displayed fibers could lead to wrong conclusions in the neurosciences, which rely heavily on this noninvasive imaging technique, in neurosurgery they could lead to surgical decisions potentially harmful for the patient if used without intraoperative cortical stimulation. DTI fiber tracking presents a valuable noninvasive preoperative imaging tool, which requires further validation after important standardization of the acquisition and processing techniques currently available.

  16. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model-Specification Language

    PubMed Central

    Höhna, Sebastian; Landis, Michael J.

    2016-01-01

    Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic-graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com. [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.] PMID:27235697

  17. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, S. George

    1994-01-01

    We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complete database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects of STATPROG, a package for multivariate statistical analysis of small and moderate-size data sets. The package was tested extensively on a number of real scientific applications and has produced real, published results.

  18. Quantitative analysis of tympanic membrane perforation: a simple and reliable method.

    PubMed

    Ibekwe, T S; Adeosun, A A; Nwaorgu, O G

    2009-01-01

    Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the ImageJ geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: P/T x 100 per cent = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area, obtained independently from assessments by two trained otologists of comparable years of experience using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by ImageJ software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
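
    The equation translates directly into code; the pixel areas below are hypothetical stand-ins for values measured on a captured still image.

      def percent_perforation(perforation_px2, membrane_px2):
          """P/T x 100%: perforation area as a share of the total membrane area."""
          return 100.0 * perforation_px2 / membrane_px2

      # hypothetical pixel areas traced on a video-otoscopic still image
      print(f"{percent_perforation(12_400, 98_750):.1f}% perforation")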

  19. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model-Specification Language.

    PubMed

    Höhna, Sebastian; Landis, Michael J; Heath, Tracy A; Boussau, Bastien; Lartillot, Nicolas; Moore, Brian R; Huelsenbeck, John P; Ronquist, Fredrik

    2016-07-01

    Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic-graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com. [Bayesian inference; Graphical models; MCMC; statistical phylogenetics.]

  20. Integration of modern statistical tools for the analysis of climate extremes into the web-GIS “CLIMATE”

    NASA Astrophysics Data System (ADS)

    Ryazanova, A. A.; Okladnikov, I. G.; Gordov, E. P.

    2017-11-01

    The frequency of occurrence and magnitude of precipitation and temperature extreme events show positive trends in several geographical regions. These events must be analyzed and studied in order to better understand their impact on the environment, predict their occurrences, and mitigate their effects. For this purpose, we augmented the web-GIS called “CLIMATE” to include a dedicated statistical package developed in the R language. The web-GIS “CLIMATE” is a software platform for cloud storage, processing and visualization of distributed archives of spatial datasets. It is based on a combined use of web and GIS technologies with reliable procedures for searching, extracting, processing, and visualizing the spatial data archives. The system provides a set of thematic online tools for the complex analysis of current and future climate changes and their effects on the environment. The package includes new powerful methods of time-dependent statistics of extremes, quantile regression and the copula approach for the detailed analysis of various climate extreme events. Specifically, the very promising copula approach allows one to obtain the structural connections between the extremes and various environmental characteristics. The new statistical methods integrated into the web-GIS “CLIMATE” can significantly facilitate and accelerate the complex analysis of climate extremes using only a desktop PC connected to the Internet.
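
    The abstract names the methods but not their code; as a hedged illustration of one standard extreme-value computation such a package offers, the sketch below fits a generalized extreme value distribution to synthetic block maxima with scipy and derives a 100-year return level (all data and numbers are invented):

    ```python
    # Hedged illustration, not the CLIMATE code: fit a generalized extreme
    # value (GEV) distribution to synthetic annual temperature maxima and
    # derive a 100-year return level. All data and numbers are invented.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    annual_max = stats.gumbel_r.rvs(loc=32.0, scale=2.5, size=60, random_state=rng)

    # Maximum-likelihood fit of the GEV to the block maxima.
    shape, loc, scale = stats.genextreme.fit(annual_max)

    # Return level exceeded with probability 1/100 in any given year.
    level_100yr = stats.genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
    print(round(level_100yr, 2))
    ```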

  1. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that tests for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method treats tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
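
    Method 1 above (classical Monte Carlo p-value evaluation) is easy to sketch generically; the following Python fragment is an illustration of the idea, not the vxdbel implementation, and its simple statistic is only a stand-in:

    ```python
    # Generic illustration of method 1 (Monte Carlo p-value evaluation); this
    # is not the vxdbel implementation, and the statistic here is a simple
    # stand-in for the density-based empirical likelihood ratio.
    import numpy as np

    def monte_carlo_pvalue(stat, x, y, n_sim=10_000, seed=1):
        """Estimate P(T >= t_obs) under H0 by re-randomizing group labels."""
        rng = np.random.default_rng(seed)
        t_obs = stat(x, y)
        pooled = np.concatenate([x, y])
        hits = 0
        for _ in range(n_sim):
            rng.shuffle(pooled)
            if stat(pooled[:len(x)], pooled[len(x):]) >= t_obs:
                hits += 1
        return (hits + 1) / (n_sim + 1)  # add-one keeps the estimate valid

    x = np.array([1.2, 0.8, 1.5, 1.1])
    y = np.array([0.4, 0.9, 0.3, 0.6])
    print(monte_carlo_pvalue(lambda a, b: abs(a.mean() - b.mean()), x, y))
    ```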

  2. Evaluation of probabilistic forecasts with the scoringRules package

    NASA Astrophysics Data System (ADS)

    Jordan, Alexander; Krüger, Fabian; Lerch, Sebastian

    2017-04-01

    Over the last decades, probabilistic forecasts in the form of predictive distributions have become popular in many scientific disciplines. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way, in order to better understand sources of prediction errors and to improve the models. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. In line with decision-theoretical principles, they allow alternative models to be compared, a crucial ability given the variety of theories, data sources and statistical specifications available in many situations. This contribution presents the software package scoringRules for the statistical programming language R, which provides functions to compute popular scoring rules such as the continuous ranked probability score (CRPS) for a variety of distributions F that come up in applied work. For univariate variables, two main classes are parametric distributions like the normal, t, or gamma distributions, and distributions that are not known analytically but are indirectly described through a sample of simulation draws; ensemble weather forecasts, for example, take this form. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices. Recent developments include the addition of scoring rules to evaluate multivariate forecast distributions. The use of the scoringRules package is illustrated in an example on post-processing ensemble forecasts of temperature.
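
    For instance, the CRPS of a normal forecast N(μ, σ²) has the well-known closed form CRPS = σ[z(2Φ(z) - 1) + 2φ(z) - 1/√π] with z = (y - μ)/σ. scoringRules itself is an R package, so the following Python rendering of that formula is only an independent illustration:

    ```python
    # Independent Python rendering of the closed-form normal CRPS (the actual
    # scoringRules package is written in R; this is only an illustration).
    import math
    from scipy.stats import norm

    def crps_normal(y: float, mu: float, sigma: float) -> float:
        z = (y - mu) / sigma
        return sigma * (z * (2 * norm.cdf(z) - 1)
                        + 2 * norm.pdf(z) - 1 / math.sqrt(math.pi))

    # Observation one standard deviation from the forecast mean: CRPS ~ 0.60.
    print(crps_normal(y=1.0, mu=0.0, sigma=1.0))
    ```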

  3. The Risa R/Bioconductor package: integrative data analysis from experimental metadata and back again

    PubMed Central

    2014-01-01

    Background The ISA-Tab format and software suite have been developed to break the silo effect induced by technology-specific formats for a variety of data types and to better support experimental metadata tracking. Experimentalists seldom use a single technique to monitor biological signals. Providing a multi-purpose, pragmatic and accessible format that abstracts away common constructs for describing Investigations, Studies and Assays, ISA is increasingly popular. To attract further interest towards the format and extend support to ensure reproducible research and reusable data, we present the Risa package, which delivers a central component to support the ISA format by enabling effortless integration with R, the popular, open source data crunching environment. Results The Risa package bridges the gap between the metadata collection and curation in an ISA-compliant way and the data analysis using the widely used statistical computing environment R. The package offers functionality for: i) parsing ISA-Tab datasets into R objects; ii) augmenting annotation with extra metadata not explicitly stated in the ISA syntax; iii) interfacing with domain-specific R packages; iv) suggesting potentially useful R packages available in Bioconductor for subsequent processing of the experimental data described in the ISA format; and finally v) saving back to ISA-Tab files augmented with analysis-specific metadata from R. We demonstrate these features by presenting use cases for mass spectrometry data and DNA microarray data. Conclusions The Risa package is open source (with LGPL license) and freely available through Bioconductor. By making Risa available, we aim to facilitate the task of processing experimental data, encouraging a uniform representation of experimental information and results while delivering tools for ensuring traceability and provenance tracking. Software availability The Risa package is available since Bioconductor 2.11 (version 1.0.0) and version 1.2.1 appeared in Bioconductor 2.12, both along with documentation and examples. The latest version of the code is at the development branch in Bioconductor and can also be accessed from GitHub https://github.com/ISA-tools/Risa, where the issue tracker allows users to report bugs or feature requests. PMID:24564732

  4. General specifications for the development of a USL NASA PC R and D statistical analysis support package

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Bassari, Jinous; Triantafyllopoulos, Spiros

    1984-01-01

    The University of Southwestern Louisiana (USL) NASA PC R and D statistical analysis support package is designed to be a three-level package to allow statistical analysis for a variety of applications within the USL Data Base Management System (DBMS) contract work. The design addresses usage of the statistical facilities as a library package, as an interactive statistical analysis system, and as a batch processing package.

  5. Three-Dimensional (3D) Nanometrology Based on Scanning Electron Microscope (SEM) Stereophotogrammetry.

    PubMed

    Tondare, Vipin N; Villarrubia, John S; Vladár, András E

    2017-10-01

    Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.
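
    The paper's own reconstruction pipeline is not given in the abstract; as a hedged sketch of the underlying geometry, a commonly quoted stereo-SEM parallax relation estimates relative height as z = p / (2 sin(Δθ/2)) for a symmetric eucentric tilt difference Δθ. The function and numbers below are hypothetical:

    ```python
    # Hedged sketch of the stereo geometry only (not the paper's pipeline):
    # relative height from measured parallax, z = p / (2 sin(dtheta / 2)),
    # assuming a symmetric eucentric tilt difference dtheta between the two
    # views and a parallax p already converted to physical units.
    import math

    def height_from_parallax(parallax_nm: float, tilt_diff_deg: float) -> float:
        half_angle = math.radians(tilt_diff_deg) / 2.0
        return parallax_nm / (2.0 * math.sin(half_angle))

    # A 35 nm parallax at a 10 degree tilt difference implies ~200 nm height.
    print(round(height_from_parallax(35.0, 10.0), 1))
    ```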

  6. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over the image processing and the display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though it was originally designed for the Spitzer Space Telescope mission, many of its functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.

  7. User Documentation for Multiple Software Releases

    NASA Technical Reports Server (NTRS)

    Humphrey, R.

    1982-01-01

    In proposed solution to problems of frequent software releases and updates, documentation would be divided into smaller packages, each of which contains data relating to only one of several software components. Changes would not affect entire document. Concept would improve dissemination of information regarding changes and would improve quality of data supporting packages. Would help to ensure both timeliness and more thorough scrutiny of changes.

  8. R software package based statistical optimization of process components to simultaneously enhance the bacterial growth, laccase production and textile dye decolorization with cytotoxicity study

    PubMed Central

    Dudhagara, Pravin; Tank, Shantilal

    2018-01-01

    The thermophilic bacterium Bacillus licheniformis U1 was used in the present study for the optimization of bacterial growth (R1), laccase production (R2) and decolorization of the synthetic disperse blue DBR textile dye (R3). Preliminary optimization was performed by the one-variable-at-a-time (OVAT) approach using four media components, viz. dye concentration, copper sulphate concentration, pH, and inoculum size. Based on the OVAT results, further statistical optimization of R1, R2 and R3 was performed by Box–Behnken design (BBD) using response surface methodology (RSM) in R software with the R Commander package. A total of 29 experimental runs were conducted in the experimental design study towards the construction of a quadratic model. The model indicated that a dye concentration of 110 ppm, copper sulphate at 0.2 mM, pH 7.5 and an inoculum size of 6% v/v were optimal for maximizing laccase production and bacterial growth, whereas maximum dye decolorization was achieved in media containing 110 ppm dye, 0.6 mM copper sulphate, pH 6 and 6% v/v inoculum. The R-package-predicted R² values for R1, R2 and R3 were 0.9917, 0.9831 and 0.9703, respectively, compared with Design-Expert (Stat-Ease) (DOE)-predicted R² values of 0.9893, 0.9822 and 0.8442. The values obtained by R software were more precise, reliable and reproducible than those of the DOE model. Laccase production increased 1.80-fold, and a 2.24-fold enhancement in dye decolorization was achieved, using the optimized medium relative to the initial experiments. Moreover, the laccase-treated sample demonstrated a lower cytotoxic effect on L132 and MCF-7 cell lines than the untreated sample in the MTT assay. The higher cell viability and lower cytotoxicity observed in the laccase-treated sample suggest a potential application of bacterial laccase in reducing dye toxicity within a rapid biodegradation process. PMID:29718934
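
    The core RSM step, fitting a second-order polynomial to the designed runs and reporting R², can be sketched generically; this is not the authors' R Commander script, and the coded factor settings and responses below are invented:

    ```python
    # Conceptual sketch of the core RSM step (not the authors' script): fit a
    # two-factor second-order model to designed runs by least squares and
    # report R-squared. Coded factor settings and responses are invented.
    import numpy as np

    def quadratic_design_matrix(X):
        """Columns 1, x1, x2, x1^2, x2^2, x1*x2 of a two-factor quadratic model."""
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0],
                  [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
    y = np.array([52.0, 61.0, 55.0, 70.0, 78.0, 77.5, 60.0, 72.0, 58.0, 66.0])

    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ beta
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(beta.round(3), round(r2, 4))
    ```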

  9. Versatile Software Package For Near Real-Time Analysis of Experimental Data

    NASA Technical Reports Server (NTRS)

    Wieseman, Carol D.; Hoadley, Sherwood T.

    1998-01-01

    This paper provides an overview of a versatile software package developed for time- and frequency-domain analyses of experimental wind-tunnel data. This package, originally developed for analyzing data in the NASA Langley Transonic Dynamics Tunnel (TDT), is applicable for analyzing any time-domain data. A Matlab-based software package, TDT-analyzer, provides a compendium of commonly-required dynamic analysis functions in a user-friendly interactive and batch processing environment. TDT-analyzer has been used extensively to provide on-line near real-time and post-test examination and reduction of measured data acquired during wind tunnel tests of aeroelastically-scaled models of aircraft and rotorcraft as well as a flight test of the NASA High Alpha Research Vehicle (HARV) F-18. The package provides near real-time results in an informative and timely manner far exceeding prior methods of data reduction at the TDT.
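
    The abstract includes no code; as a hedged, self-contained illustration of the kind of frequency-domain step such a package performs (TDT-analyzer itself is MATLAB-based), the sketch below estimates a power spectral density with Welch's method on synthetic data:

    ```python
    # Illustrative only (TDT-analyzer itself is MATLAB-based): one typical
    # frequency-domain step, a Welch power spectral density estimate on a
    # synthetic 12.5 Hz tone in noise. All parameters are hypothetical.
    import numpy as np
    from scipy import signal

    fs = 1000.0                          # sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)
    x = (np.sin(2 * np.pi * 12.5 * t)
         + 0.5 * np.random.default_rng(0).standard_normal(t.size))

    # Welch's method averages windowed periodograms to reduce variance.
    freqs, psd = signal.welch(x, fs=fs, nperseg=2048)
    print(freqs[np.argmax(psd)])         # peak near 12.5 Hz
    ```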

  10. The Application of a Statistical Analysis Software Package to Explosive Testing

    DTIC Science & Technology

    1993-12-01

    [Fragmentary OCR excerpt; the full abstract is not recoverable. Surviving text refers to a standard deviation not corrected for the test interval; quantities M and s defined in equations 2 and 3 of the report; appendices listing Program Structured Diagrams, Bruceton Reference Graphs, and Input and Output Data File Formats; and a digitised reference graph (Graph II) incorporated into the Bruceton analysis program.]

  11. An Integrated Nursing Management Information System: From Concept to Reality

    PubMed Central

    Pinkley, Connie L.; Sommer, Patricia K.

    1988-01-01

    This paper addresses the transition from the conceptualization of a Nursing Management Information System (NMIS) integrated and interdependent with the Hospital Information System (HIS) to its realization. Concepts of input, throughput, and output are presented to illustrate developmental strategies used to achieve nursing information products. Essential processing capabilities include: 1) ability to interact with multiple data sources; 2) database management, statistical, and graphics software packages; 3) online, batch and reporting; and 4) interactive data analysis. Challenges encountered in system construction are examined.

  12. Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis

    PubMed Central

    2011-01-01

    Background A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. Results The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on 10 subjects of three-dimensional T1-weighted MRI scans. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operating system. Although the processing speed was slower than that under other conditions, it remained comparable. Conclusions With Lin4Neuro in one's hand, one can access neuroimaging software packages easily, and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners of neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites. PMID:21266047

  13. Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis.

    PubMed

    Nemoto, Kiyotaka; Dan, Ippeita; Rorden, Christopher; Ohnishi, Takashi; Tsuzuki, Daisuke; Okamoto, Masako; Yamashita, Fumio; Asada, Takashi

    2011-01-25

    A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on 10 subjects of three-dimensional T1-weighted MRI scans. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operating system. Although the processing speed was slower than that under other conditions, it remained comparable. With Lin4Neuro in one's hand, one can access neuroimaging software packages easily, and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners of neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites.

  14. User guide to Exploration and Graphics for RivEr Trends (EGRET) and dataRetrieval: R packages for hydrologic data

    USGS Publications Warehouse

    Hirsch, Robert M.; De Cicco, Laura A.

    2015-01-01

    Evaluating long-term changes in river conditions (water quality and discharge) is an important use of hydrologic data. To carry out such evaluations, the hydrologist needs tools to facilitate several key steps in the process: acquiring the data records from a variety of sources, structuring them in ways that facilitate the analysis, processing the data with routines that extract information about changes that may be happening, and displaying findings with graphical techniques. A pair of tightly linked R packages, called dataRetrieval and EGRET (Exploration and Graphics for RivEr Trends), have been developed for carrying out each of these steps in an integrated manner. They are designed to easily accept data from three sources: U.S. Geological Survey hydrologic data, U.S. Environmental Protection Agency (EPA) STORET data, and user-supplied flat files. The dataRetrieval package not only serves as a “front end” to the EGRET package, it can also be used to easily download many types of hydrologic data and organize them in ways that facilitate many other hydrologic applications. The EGRET package has components oriented towards the description of long-term changes in streamflow statistics (high flow, average flow, and low flow) as well as changes in water quality. For the water-quality analysis, it uses Weighted Regressions on Time, Discharge and Season (WRTDS) to describe long-term trends in both concentration and flux. EGRET also creates a wide range of graphical presentations of the water-quality data and of the WRTDS results. This report serves as a user guide to these two R packages, providing detailed guidance on installation and use of the software, documentation of the analysis methods used, as well as guidance on some of the kinds of questions and approaches that the software can facilitate.

  15. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  16. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  17. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  18. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  19. Environmental databases and other computerized information tools

    NASA Technical Reports Server (NTRS)

    Clark-Ingram, Marceia

    1995-01-01

    Increasing environmental legislation has brought about the development of many new environmental databases and software application packages to aid in the quest for environmental compliance. These databases and software packages are useful tools and applicable to a wide range of environmental areas from atmospheric modeling to materials replacement technology. The great abundance of such products and services can be very overwhelming when trying to identify the tools which best meet specific needs. This paper will discuss the types of environmental databases and software packages available. This discussion will also encompass the affected environmental areas of concern, product capabilities, and hardware requirements for product utilization.

  20. Reviews.

    ERIC Educational Resources Information Center

    Radcliffe, George; And Others

    1988-01-01

    Reviews three software packages: 1) a package containing 68 programs covering general topics in chemistry; 2) a package dealing with acid-base titration curves that allows variables to be changed; and 3) a chemistry tutorial and drill package. (MVL)

  1. Examining the effect of the computer-based educational package on quality of life and severity of hypogonadism symptoms in males.

    PubMed

    Afsharnia, Elahe; Pakgohar, Minoo; Khosravi, Shahla; Haghani, Hamid

    2018-06-01

    The objective of this study was to determine the effect of the computer-based educational package on men's QoL and the severity of their hypogonadism symptoms. A quasi-experimental study was conducted on 80 male employees. The data collection tools included the 'Aging Male Symptoms' (AMS) and 'Short Form-36' (SF36) questionnaires. Four sessions were held for the intervention group over a period of 4 weeks. Two months after training, QoL and the severity of hypogonadism symptoms were measured in both the intervention and control groups. The data were analyzed with SPSS 22 software and statistical tests such as χ², the independent t-test, Fisher's exact test, and paired t-tests. Statistically significant changes were observed in the intervention group between baseline and 2 months after the training in the QoL score in the overall dimensions of physical-psychological health and all its domains, except for three domains: emotional role, social function, and pain. Furthermore, the paired t-tests showed significant differences between baseline and 2 months after the training in all the domains and the overall hypogonadism score in the intervention group. Based on our findings, the computer-based educational package has a positive effect on QoL and reduces hypogonadism symptoms.

  2. Design and Implementation of Scientific Software Components to Enable Multiscale Modeling: The Effective Fragment Potential (QM/EFP) Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaenko, Alexander; Windus, Theresa L.; Sosonkina, Masha

    2012-10-19

    The design and development of scientific software components to provide an interface to the effective fragment potential (EFP) methods are reported. Multiscale modeling of physical and chemical phenomena demands the merging of software packages developed by research groups in significantly different fields. Componentization offers an efficient way to realize new high performance scientific methods by combining the best models available in different software packages without a need for package readaptation after the initial componentization is complete. The EFP method is an efficient electronic structure theory based model potential that is suitable for predictive modeling of intermolecular interactions in large molecular systems, such as liquids, proteins, atmospheric aerosols, and nanoparticles, with an accuracy that is comparable to that of correlated ab initio methods. The developed components make the EFP functionality accessible for any scientific component-aware software package. The performance of the component is demonstrated on a protein interaction model, and its accuracy is compared with results obtained with coupled cluster methods.

  3. Wall adjustment strategy software for use with the NASA Langley 0.3-meter transonic cryogenic tunnel adaptive wall test section

    NASA Technical Reports Server (NTRS)

    Wolf, Stephen W. D.

    1988-01-01

    The Wall Adjustment Strategy (WAS) software provides successful on-line control of the 2-D flexible walled test section of the Langley 0.3-m Transonic Cryogenic Tunnel. This software package allows the level of operator intervention to be regulated as necessary for research and production type 2-D testing using an Adaptive Wall Test Section (AWTS). The software is designed to accept modification for future requirements, such as 3-D testing, with a minimum of complexity. The WAS software described is an attempt to provide a user-friendly package which could be used to control any flexible walled AWTS. Control system constraints influence the details of data transfer, not the data type, so this entire software package could be used in different control systems if suitable interface software is available. A complete overview of the software highlights the data flow paths, the modular architecture of the software and the various operating and analysis modes available. A detailed description of the software modules includes listings of the code. A user's manual is provided to explain task generation, operating environment, user options and what to expect at execution.

  4. ProFound: Source Extraction and Application to Modern Survey Data

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Davies, L. J. M.; Driver, S. P.; Koushan, S.; Taranu, D. S.; Casura, S.; Liske, J.

    2018-05-01

    We introduce PROFOUND, a source finding and image analysis package. PROFOUND provides methods to detect sources in noisy images, generate segmentation maps identifying the pixels belonging to each source, and measure statistics like flux, size, and ellipticity. These inputs are key requirements of PROFIT, our recently released galaxy profiling package, where the design aim is that these two software packages will be used in unison to semi-automatically profile large samples of galaxies. The key novel feature introduced in PROFOUND is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, the de-blending is made across saddle points in flux. We apply PROFOUND in a number of simulated and real-world cases, and demonstrate that it behaves reasonably given its stated design goals. In particular, it offers good initial parameter estimation for PROFIT, and also segmentation maps that follow the sometimes complex geometry of resolved sources, whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the PROFOUND and PROFIT pipeline, and adoption is being encouraged by publicly releasing the software for the open source R data analysis platform under an LGPL-3 license on GitHub (github.com/asgr/ProFound).
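
    ProFound itself is an R package and its algorithms are more elaborate, but the core idea of photometry on dilated segmentation maps can be sketched in a few lines of Python; all thresholds, sizes, and the toy image below are hypothetical:

    ```python
    # Conceptual Python sketch of photometry on dilated segmentation maps in
    # the spirit of ProFound (which is an R package); thresholds, sizes and
    # the toy image are all hypothetical.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)
    image = rng.normal(0.0, 1.0, (64, 64))
    image[20:24, 30:34] += 25.0                  # one artificial source

    # Detect: threshold the image and label connected pixels into segments.
    segments, n_sources = ndimage.label(image > 5.0)

    # Dilate the segmentation so the aperture also catches the faint wings.
    dilated = ndimage.grey_dilation(segments, size=(5, 5))

    # Measure: total flux of source 1 over its dilated segment.
    print(n_sources, image[dilated == 1].sum())
    ```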

  5. RCHILD - an R-package for flexible use of the landscape evolution model CHILD

    NASA Astrophysics Data System (ADS)

    Dietze, Michael

    2014-05-01

    Landscape evolution models provide powerful approaches to numerically assess earth surface processes, quantify rates of landscape change, infer sediment transfer rates, estimate sediment budgets, investigate the consequences of changes in external drivers on a geomorphic system, provide spatio-temporal interpolations between known landscape states, and test conceptual hypotheses. CHILD (Channel-Hillslope Integrated Landscape Development Model) is one of the most widely used models of landscape change, particularly for interacting tectonic and geomorphologic processes. Running CHILD from the command line and working with the model output can be a rather awkward task (static model control via a text input file, only numeric output in text files). The package RCHILD is a collection of functions for the free statistical software R that supports using CHILD in a flexible, dynamic and user-friendly way. The comprised functions allow creating maps, real-time scenes, animations and further thematic plots from model output. The model input files can be modified dynamically and, hence, (feedback-related) changes in external factors can be implemented iteratively. Output files can be written to common formats that can be readily imported into standard GIS software. This contribution presents the basic functionality of the model CHILD as visualised and modified by the package. A rough overview of the available functions is given. Application examples help to illustrate the great potential of numerical modelling of geomorphologic processes.

  6. Dynamic online surveys and experiments with the free open-source software dynQuest.

    PubMed

    Rademacher, Jens D M; Lippke, Sonia

    2007-08-01

    With computers and the World Wide Web widely available, collecting data through Web browsers is an attractive method utilized by the social sciences. In this article, conducting PC- and Web-based trials with the software package dynQuest is described. The software manages dynamic questionnaire-based trials over the Internet or on single computers, possibly as randomized controlled trials (RCTs) if two or more groups are involved. The choice of follow-up questions can depend on previous responses, as needed for matched interventions. Data are collected in a simple text-based database that can be imported easily into other programs for postprocessing and statistical analysis. The software consists of platform-independent scripts written in the programming language PERL that use the common gateway interface between Web browser and server for submission of data through HTML forms. Advantages of dynQuest are parsimony, simplicity in use and installation, transparency, and reliability. The program is available as open-source freeware from the authors.

  7. A basic analysis toolkit for biological sequences

    PubMed Central

    Giancarlo, Raffaele; Siragusa, Alessandro; Siragusa, Enrico; Utro, Filippo

    2007-01-01

    This paper presents a software library, nicknamed BATS, for some basic sequence analysis tasks: local alignments, via approximate string matching, and global alignments, via longest common subsequence and alignments with affine and concave gap cost functions. Moreover, it also supports filtering operations to select strings from a set and establish their statistical significance, via z-score computation. None of the algorithms is new, but although they are generally regarded as fundamental for sequence analysis, they have not previously been implemented in a single and consistent software package, as we do here. Therefore, our main contribution is to fill this gap between algorithmic theory and practice by providing an extensible and easy to use software library that includes algorithms for the mentioned string matching and alignment problems. The library consists of C/C++ library functions as well as Perl library functions. It can be interfaced with Bioperl and can also be used as a stand-alone system with a GUI. The software is freely available under the GNU GPL. PMID:17877802
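
    As an independent illustration of one task the library covers (not the BATS code itself), the classic dynamic program for longest-common-subsequence length runs in O(nm) time:

    ```python
    # Reference implementation of one task the library covers: the classic
    # O(n*m) dynamic program for longest common subsequence length. This is
    # an independent sketch, not BATS code.
    def lcs_length(a: str, b: str) -> int:
        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[n][m]

    print(lcs_length("ACCGGTA", "ACGGAT"))  # 5, e.g. "ACGGA"
    ```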

  8. An evaluation of object-oriented image analysis techniques to identify motorized vehicle effects in semi-arid to arid ecosystems of the American West

    USGS Publications Warehouse

    Mladinich, C.

    2010-01-01

    Human disturbance is a leading ecosystem stressor. Human-induced modifications include transportation networks, areal disturbances due to resource extraction, and recreation activities. High-resolution imagery and object-oriented classification rather than pixel-based techniques have successfully identified roads, buildings, and other anthropogenic features. Three commercial, automated feature-extraction software packages (Visual Learning Systems' Feature Analyst, ENVI Feature Extraction, and Definiens Developer) were evaluated by comparing their ability to effectively detect the disturbed surface patterns from motorized vehicle traffic. Each package achieved overall accuracies in the 70% range, demonstrating the potential to map the surface patterns. The Definiens classification was more consistent and statistically valid. Copyright ?? 2010 by Bellwether Publishing, Ltd. All rights reserved.

  9. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik; Marwan, Norbert; Dijkstra, Henk; Kurths, Jürgen

    2016-04-01

    We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology. pyunicorn is available online at https://github.com/pik-copan/pyunicorn. Reference: J.F. Donges, J. Heitzig, B. Beronov, M. Wiedermann, J. Runge, Q.-Y. Feng, L. Tupikina, V. Stolbova, R.V. Donner, N. Marwan, H.A. Dijkstra, and J. Kurths, Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package, Chaos 25, 113101 (2015), DOI: 10.1063/1.4934554, Preprint: arxiv.org:1507.01571 [physics.data-an].
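
    pyunicorn's own API is not reproduced in this abstract; purely as a hedged reference sketch of one method named above, the natural visibility graph of a time series can be built naively as follows (illustration only, far slower than a production implementation):

    ```python
    # Naive reference sketch of one method named above, the natural
    # visibility graph; pyunicorn's own implementation is different and
    # faster. Illustration only.
    import numpy as np

    def natural_visibility_edges(x):
        """Edges (i, j): every point k between i and j lies strictly below
        the straight line joining (i, x[i]) and (j, x[j])."""
        edges = []
        for i in range(len(x)):
            for j in range(i + 1, len(x)):
                if all(x[k] < x[i] + (x[j] - x[i]) * (k - i) / (j - i)
                       for k in range(i + 1, j)):
                    edges.append((i, j))
        return edges

    # Adjacent points always see each other; peaks gain long-range links.
    print(natural_visibility_edges(np.array([1.0, 3.0, 2.0, 4.0, 1.5])))
    ```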

  10. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...-compatible format. All databases must be supported with adequate documentation on data attributes, SQL...

  11. Cognition of and Demand for Education and Teaching in Medical Statistics in China: A Systematic Review and Meta-Analysis

    PubMed Central

    Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong

    2015-01-01

    Background Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. Objectives This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. Methods We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. Results There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and the Statistical Analysis System (SAS) software are only 55% and 15%, respectively. Conclusion The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent. PMID:26053876

  12. Cognition of and Demand for Education and Teaching in Medical Statistics in China: A Systematic Review and Meta-Analysis.

    PubMed

    Wu, Yazhou; Zhou, Liang; Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong

    2015-01-01

    Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and the Statistical Analysis System (SAS) software are only 55% and 15%, respectively. The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent.

  13. Differential item functioning magnitude and impact measures from item response theory models.

    PubMed

    Kleinman, Marjorie; Teresi, Jeanne A

    2016-01-01

    Measures of magnitude and impact of differential item functioning (DIF) at the item and scale level, respectively, are presented and reviewed in this paper. Most measures are based on item response theory models. Magnitude refers to item-level effect sizes, whereas impact refers to differences between groups at the scale score level. Reviewed are magnitude measures based on group differences in the expected item scores and impact measures based on differences in the expected scale scores. The similarities among these indices are demonstrated. Various software packages that provide magnitude and impact measures are described, and new software is presented that conveniently computes all of the available statistics in one program, with explanations of their relationships to one another.

  14. From reads to genes to pathways: differential expression analysis of RNA-Seq experiments using Rsubread and the edgeR quasi-likelihood pipeline.

    PubMed

    Chen, Yunshun; Lun, Aaron T L; Smyth, Gordon K

    2016-01-01

    In recent years, RNA sequencing (RNA-seq) has become a very widely used technology for profiling gene expression. One of the most common aims of RNA-seq profiling is to identify genes or molecular pathways that are differentially expressed (DE) between two or more biological conditions. This article demonstrates a computational workflow for the detection of DE genes and pathways from RNA-seq data by providing a complete analysis of an RNA-seq experiment profiling epithelial cell subsets in the mouse mammary gland. The workflow uses R software packages from the open-source Bioconductor project and covers all steps of the analysis pipeline, including alignment of read sequences, data exploration, differential expression analysis, visualization and pathway analysis. Read alignment and count quantification are conducted using the Rsubread package, and the statistical analyses are performed using the edgeR package. The differential expression analysis uses the quasi-likelihood functionality of edgeR.

  15. Integrating existing software toolkits into VO system

    NASA Astrophysics Data System (ADS)

    Cui, Chenzhou; Zhao, Yong-Heng; Wang, Xiaoqian; Sang, Jian; Luo, Ze

    2004-09-01

    Virtual Observatory (VO) is a collection of interoperating data archives and software tools. Taking advantage of the latest information technologies, it aims to provide a data-intensive online research environment for astronomers all around the world. A large number of high-quality astronomical software packages and libraries are powerful and easy to use, and have been widely used by astronomers for many years. Integrating those toolkits into the VO system is a necessary and important task for the VO developers. VO architecture greatly depends on Grid and Web services; consequently, the general VO integration route is "Java Ready - Grid Ready - VO Ready". In the paper, we discuss the importance of VO integration for existing toolkits and discuss the possible solutions. We introduce two efforts in the field from the China-VO project, "gImageMagick" and "Galactic abundance gradients statistical research under grid environment". We also discuss what additional work should be done to convert Grid services to VO services.

  16. Interactive graphics for the Macintosh: software review of FlexiGraphs.

    PubMed

    Antonak, R F

    1990-01-01

    While this product is clearly unique, its usefulness to individuals outside small business environments is somewhat limited. FlexiGraphs is, however, a reasonable first attempt to design a microcomputer software package that controls data through interactive editing within a graph. Although the graphics capabilities of mainframe programs such as MINITAB (Ryan, Joiner, & Ryan, 1981) and the graphic manipulations available through exploratory data analysis (e.g., Velleman & Hoaglin, 1981) will not be surpassed anytime soon by this program, a researcher may want to add this program to a software library containing other Macintosh statistics, drawing, and graphics programs, if only to obtain its easy curve-fitting and line-smoothing options. I welcome the opportunity to review the enhanced "scientific" version of FlexiGraphs that the author of the program indicates is currently under development. An MS-DOS version of the program should be available within the year.

  17. Use of statistical study methods for the analysis of the results of the imitation modeling of radiation transfer

    NASA Astrophysics Data System (ADS)

    Alekseenko, M. A.; Gendrina, I. Yu.

    2017-11-01

    Recently, owing to the abundance of various types of observational data in systems for vision through the atmosphere and the need to process them, the use of statistical research methods (such as correlation-regression analysis, time-series analysis, and analysis of variance) has become relevant in the study of such systems. We have attempted to apply elements of correlation-regression analysis to the study, and subsequent prediction, of the patterns of radiation transfer in these systems, as well as to the construction of radiation models of the atmosphere. In this paper, we present some results of statistical processing of the results of numerical simulation of the characteristics of vision systems through the atmosphere, obtained with the help of a special software package.

  18. ’Pushing a Big Rock Up a Steep Hill’: Acquisition Lessons Learned from DoD Applications Storefront

    DTIC Science & Technology

    2014-04-30

    software patches, web applications, widgets, and mobile application packages. The envisioned application store will deliver software from a central... mobile technologies, hoping to enhance warfighter situational awareness and access to information. Unfortunately, the Defense Acquisition System has not

  19. MicroSIFT Courseware Evaluation. [Set 13 (294-319), Set 14 (320-361), with Hardware (HRD) and Subject (SBJ) Indexes to Both Sets.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This document consists of 68 microcomputer software package evaluations prepared by MicroSIFT (Microcomputer Software and Information for Teachers) Clearinghouse at the Northwest Regional Education Laboratory. There are 26 packages in set 13 and 42 in set 14. Each software review lists producer, time and place of evaluation, cost, ability level,…

  20. Permutation-based inference for the AUC: A unified approach for continuous and discontinuous data.

    PubMed

    Pauly, Markus; Asendorf, Thomas; Konietschke, Frank

    2016-11-01

    We investigate rank-based studentized permutation methods for the nonparametric Behrens-Fisher problem, that is, inference methods for the area under the ROC curve. We hereby prove that the studentized permutation distribution of the Brunner-Munzel rank statistic is asymptotically standard normal, even under the alternative, thus incidentally providing the hitherto missing theoretical foundation for the Neubert and Brunner studentized permutation test. In particular, we not only show its consistency, but also show that confidence intervals for the underlying treatment effects can be computed by inverting this permutation test. In addition, we derive permutation-based range-preserving confidence intervals. Extensive simulation studies show that the permutation-based confidence intervals appear to maintain the preassigned coverage probability quite accurately (even for rather small sample sizes). For a convenient application of the proposed methods, a freely available software package for the statistical software R has been developed. A real data example illustrates the application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
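
    The published package is for R; purely as a conceptual sketch of the studentized permutation idea (not the authors' implementation), one can permute the pooled observations, recompute the Brunner-Munzel statistic each time, and read off a two-sided p-value, here via scipy:

    ```python
    # Conceptual Python rendering of the studentized permutation idea (the
    # published package is for R): permute pooled observations, recompute the
    # Brunner-Munzel statistic, and read off a two-sided p-value.
    import numpy as np
    from scipy import stats

    def bm_permutation_pvalue(x, y, n_perm=5000, seed=3):
        rng = np.random.default_rng(seed)
        t_obs = stats.brunnermunzel(x, y).statistic
        pooled = np.concatenate([x, y])
        t_perm = np.empty(n_perm)
        for b in range(n_perm):
            rng.shuffle(pooled)
            t_perm[b] = stats.brunnermunzel(pooled[:len(x)], pooled[len(x):]).statistic
        return float(np.mean(np.abs(t_perm) >= abs(t_obs)))

    x = np.array([1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 4, 1, 1], dtype=float)
    y = np.array([3, 3, 4, 3, 1, 2, 3, 1, 1, 5, 4], dtype=float)
    print(bm_permutation_pvalue(x, y))
    ```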

  1. Software Application Profile: Opal and Mica: open-source software solutions for epidemiological data management, harmonization and dissemination

    PubMed Central

    Doiron, Dany; Marcon, Yannick; Fortier, Isabel; Burton, Paul; Ferretti, Vincent

    2017-01-01

    Abstract Motivation Improving the dissemination of information on existing epidemiological studies and facilitating the interoperability of study databases are essential to maximizing the use of resources and accelerating improvements in health. To address this, Maelstrom Research proposes Opal and Mica, two inter-operable open-source software packages providing out-of-the-box solutions for epidemiological data management, harmonization and dissemination. Implementation Opal and Mica are two standalone but inter-operable web applications written in Java, JavaScript and PHP. They provide web services and modern user interfaces to access them. General features Opal allows users to import, manage, annotate and harmonize study data. Mica is used to build searchable web portals disseminating study and variable metadata. When used conjointly, Mica users can securely query and retrieve summary statistics on geographically dispersed Opal servers in real-time. Integration with the DataSHIELD approach allows conducting more complex federated analyses involving statistical models. Availability Opal and Mica are open-source and freely available at [www.obiba.org] under a General Public License (GPL) version 3, and the metadata models and taxonomies that accompany them are available under a Creative Commons licence. PMID:29025122

  2. AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPACK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used to represent motion between two coordinate frames.) Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
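
    As a small illustration of one operation the package wraps in an infix operator, the Hamilton quaternion product is sketched below; this Python version is independent of the Ada code, and the scalar-first (w, x, y, z) component ordering is our assumption:

    ```python
    # Illustration of the quaternion product that the package exposes as an
    # infix operator; this Python sketch is independent of the Ada code, and
    # the scalar-first (w, x, y, z) ordering is our assumption.
    from dataclasses import dataclass

    @dataclass
    class Quat:
        w: float
        x: float
        y: float
        z: float

        def __mul__(self, q: "Quat") -> "Quat":
            # Hamilton product: (w1 + v1)(w2 + v2) =
            #   w1*w2 - v1.v2  +  w1*v2 + w2*v1 + v1 x v2
            return Quat(
                self.w * q.w - self.x * q.x - self.y * q.y - self.z * q.z,
                self.w * q.x + self.x * q.w + self.y * q.z - self.z * q.y,
                self.w * q.y - self.x * q.z + self.y * q.w + self.z * q.x,
                self.w * q.z + self.x * q.y - self.y * q.x + self.z * q.w,
            )

    print(Quat(0, 1, 0, 0) * Quat(0, 0, 1, 0))  # i * j = k
    ```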

  3. SIMA: Python software for analysis of dynamic fluorescence imaging data.

    PubMed

    Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.

  4. Statistical significance approximation in local trend analysis of high-throughput time-series data using the theory of Markov chains.

    PubMed

    Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu

    2015-09-21

    Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g., data from next-generation sequencing-based studies. By extending the theories for the tail probability of the range of sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps) in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach integrating the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data, where we identified interesting dynamic patterns of organism co-occurrence. The software tool is integrated into the eLSA software package that now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
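
    To make the notion of a local trend score concrete, here is a simplified Python sketch for the no-delay case: each series is reduced to up/down/flat steps, and the score is the strongest contiguous run of co-trending (or anti-trending) steps, found with a Kadane-style maximum-subarray scan. This illustrates the idea only; eLSA's actual scoring and the paper's significance approximation are more general.

```python
import numpy as np

def trend_series(x):
    """Code each step as up (+1), down (-1), or flat (0)."""
    return np.sign(np.diff(x))

def local_trend_score(x, y):
    """Simplified local trend score without time delay: the maximum sum
    over any contiguous run of the elementwise product of the two trend
    series (checked for both signs), normalized by series length."""
    w = trend_series(np.asarray(x, float)) * trend_series(np.asarray(y, float))
    best = 0.0
    for s in (w, -w):            # positive and negative co-trending
        run = peak = 0.0
        for v in s:
            run = max(0.0, run + v)   # Kadane's maximum-subarray scan
            peak = max(peak, run)
        best = max(best, peak)
    return best / len(w)

x = [1, 2, 3, 2, 1, 2, 3, 4]
y = [2, 3, 4, 3, 2, 3, 4, 5]
print(local_trend_score(x, y))  # 1.0 for perfectly co-changing series
```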

  5. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.

  6. Diagnosis diagrams for passing signals on an automatic block signaling railway section

    NASA Astrophysics Data System (ADS)

    Spunei, E.; Piroi, I.; Chioncel, C. P.; Piroi, F.

    2018-01-01

    This work presents a diagnosis method for railway traffic security installations. More specifically, the authors present a series of diagnosis charts for passing signals on a railway block equipped with an automatic block signaling installation. These charts are based on the exploitation electric schemes, and are subsequently used to develop a diagnosis software package. The thus developed software package contributes substantially to a reduction of failure detection and remedy for these types of installation faults. The use of the software package eliminates making wrong decisions in the fault detection process, decisions that may result in longer remedy times and, sometimes, to railway traffic events.

  7. MEG and EEG data analysis with MNE-Python.

    PubMed

    Gramfort, Alexandre; Luessi, Martin; Larson, Eric; Engemann, Denis A; Strohmeier, Daniel; Brodbeck, Christian; Goj, Roman; Jas, Mainak; Brooks, Teon; Parkkonen, Lauri; Hämäläinen, Matti

    2013-12-26

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne.
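
    As an illustration of the scripted pipelines the abstract describes, a minimal evoked-response analysis against MNE-Python's bundled sample dataset might look like the following; the file path and event code are taken from that dataset's standard tutorials as I recall them, so treat them as assumptions.

```python
import os
import mne

# Fetch MNE's bundled "sample" dataset (downloads on first use) and
# run a minimal band-pass / epoch / average pipeline.
data_path = str(mne.datasets.sample.data_path())
raw_fname = os.path.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')

raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)                   # band-pass the data
events = mne.find_events(raw, stim_channel='STI 014')
epochs = mne.Epochs(raw, events, event_id={'auditory/left': 1},
                    tmin=-0.2, tmax=0.5, baseline=(None, 0))
evoked = epochs.average()                             # evoked response
evoked.plot()                                         # butterfly plot
```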

  8. MEG and EEG data analysis with MNE-Python

    PubMed Central

    Gramfort, Alexandre; Luessi, Martin; Larson, Eric; Engemann, Denis A.; Strohmeier, Daniel; Brodbeck, Christian; Goj, Roman; Jas, Mainak; Brooks, Teon; Parkkonen, Lauri; Hämäläinen, Matti

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne. PMID:24431986

  9. Orbit determination for ISRO satellite missions

    NASA Astrophysics Data System (ADS)

    Rao, Ch. Sreehari; Sinha, S. K.

    Indian Space Research Organisation (ISRO) has been successful in using the in-house developed orbit determination and prediction software for satellite missions of Bhaskara, Rohini and APPLE. Considering the requirements of satellite missions, software packages are developed, tested and their accuracies are assessed. Orbit determination packages developed are SOIP, for low earth orbits of Bhaskara and Rohini missions, ORIGIN and ODPM, for orbits related to all phases of geo-stationary missions and SEGNIP, for drift and geo-stationary orbits. Software is tested and qualified using tracking data of SIGNE-3, D5-B, OTS, SYMPHONIE satellites with the help of software available with CNES, ESA and DFVLR. The results match well with those available from these agencies. These packages have supported orbit determination successfully throughout the mission life for all ISRO satellite missions.

  10. Electronic and software subsystems for an autonomous roving vehicle. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Doig, G. A.

    1980-01-01

    The complete electronics packaging which controls the Mars roving vehicle is described in order to provide a broad overview of the systems that are part of that package. Some software debugging tools are also discussed. Particular emphasis is given to those systems that are controlled by the microprocessor. These include the laser mast, the telemetry system, the command link prime interface board, and the prime software.

  11. The Shock and Vibration Digest. Volume 17, Number 4

    DTIC Science & Technology

    1985-04-01

    ...designed to be easy to use from the outset, and this design philosophy is largely responsible for their increasing popularity; this same design philosophy appears to have been carried over to the design of today's software packages for engineering computations, which were specifically written for use on microcomputers. Software packages related to shock and vibration are available for both experimental and analytical applications. Typical software...

  12. Hierarchical Petascale Simulation Framework For Stress Corrosion Cracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grama, Ananth

    2013-12-18

    A number of major accomplishments resulted from the project. These include:
    • Data Structures, Algorithms, and Numerical Methods for Reactive Molecular Dynamics. We have developed a range of novel data structures, algorithms, and solvers (amortized ILU, Spike) for use with ReaxFF and charge equilibration.
    • Parallel Formulations of Reactive MD (Purdue Reactive Molecular Dynamics Package: PuReMD, PuReMD-GPU, and PG-PuReMD) for messaging, GPU, and GPU cluster platforms. We have developed efficient serial, parallel (MPI), GPU (Cuda), and GPU cluster (MPI/Cuda) implementations. Our implementations have been demonstrated to be significantly better than the state of the art, both in terms of performance and scalability.
    • Comprehensive Validation in the Context of Diverse Applications. We have demonstrated the use of our software in diverse systems, including silica-water and silicon-germanium nanorods, and, as part of other projects, extended it to applications ranging from explosives (RDX) to lipid bilayers (biomembranes under oxidative stress).
    • Open Source Software Packages for Reactive Molecular Dynamics. All versions of our software have been released into the public domain. There are over 100 major research groups worldwide using our software.
    • Implementation into the Department of Energy LAMMPS Software Package. We have integrated our software into the Department of Energy LAMMPS software package.

  13. Developing a Virtual Physics World

    ERIC Educational Resources Information Center

    Wegener, Margaret; McIntyre, Timothy J.; McGrath, Dominic; Savage, Craig M.; Williamson, Michael

    2012-01-01

    In this article, the successful implementation of a development cycle for a physics teaching package based on game-like virtual reality software is reported. The cycle involved several iterations of evaluating students' use of the package followed by instructional and software development. The evaluation used a variety of techniques, including…

  14. Teaching Science and Mathematics Subjects Using the Excel Spreadsheet Package

    ERIC Educational Resources Information Center

    Ibrahim, Dogan

    2009-01-01

    The teaching of scientific subjects usually require laboratories where students can put the theory they have learned into practice. Traditionally, electronic programmable calculators, dedicated software, or expensive software simulation packages, such as MATLAB have been used to simulate scientific experiments. Recently, spreadsheet programs have…

  15. Description of the IV + V System Software Package.

    ERIC Educational Resources Information Center

    Microcomputers for Information Management: An International Journal for Library and Information Services, 1984

    1984-01-01

    Describes the IV + V System, a software package designed by the Institut fur Maschinelle Dokumentation for the United Nations General Information Programme and UNISIST to support automation of local information and documentation services. Principal program features and functions outlined include input/output, databank, text image, output, and…

  16. SOCR: Statistics Online Computational Resource

    PubMed Central

    Dinov, Ivo D.

    2011-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741

  17. Genome-wide study of correlations between genomic features and their relationship with the regulation of gene expression.

    PubMed

    Kravatsky, Yuri V; Chechetkin, Vladimir R; Tchurikov, Nikolai A; Kravatskaya, Galina I

    2015-02-01

    The broad class of tasks in genetics and epigenetics can be reduced to the study of various features that are distributed over the genome (genome tracks). The rapid and efficient processing of the huge amounts of data stored in genome-scale databases cannot be achieved without software packages based on analytical criteria. However, the strong inhomogeneity of genome tracks hampers the development of relevant statistics. We developed criteria for the assessment of genome track inhomogeneity and correlations between two genome tracks. We also developed a software package, Genome Track Analyzer, based on this theory. The theory and software were tested on simulated data and were applied to the study of correlations between CpG islands and transcription start sites in the Homo sapiens genome, between profiles of protein-binding sites in chromosomes of Drosophila melanogaster, and between DNA double-strand breaks and histone marks in the H. sapiens genome. Significant correlations between transcription start sites on the forward and the reverse strands were observed in genomes of D. melanogaster, Caenorhabditis elegans, Mus musculus, H. sapiens, and Danio rerio. The observed correlations may be related to the regulation of gene expression in eukaryotes. Genome Track Analyzer is freely available at http://ancorr.eimb.ru/. © The Author 2015. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
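
    Genome Track Analyzer rests on analytical criteria precisely because permutation schemes are slow; for orientation, a brute-force baseline for correlating two genome tracks might look like the following Python sketch (binning plus circular-shift permutations to respect autocorrelation; all function names are mine, not the package's).

```python
import numpy as np

def binned_counts(positions, chrom_len, bin_size):
    """Count features (e.g., TSSs or CpG islands) per fixed-size bin."""
    bins = np.arange(0, chrom_len + bin_size, bin_size)
    counts, _ = np.histogram(positions, bins=bins)
    return counts

def circular_permutation_pvalue(a, b, n_perm=1000, seed=0):
    """P-value for the correlation of two binned tracks, using random
    circular shifts of one track to preserve its autocorrelation."""
    rng = np.random.default_rng(seed)
    obs = np.corrcoef(a, b)[0, 1]
    null = np.array([
        np.corrcoef(np.roll(a, rng.integers(1, len(a))), b)[0, 1]
        for _ in range(n_perm)
    ])
    return (np.sum(np.abs(null) >= abs(obs)) + 1) / (n_perm + 1)
```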

  18. InterFace: A software package for face image warping, averaging, and principal components analysis.

    PubMed

    Kramer, Robin S S; Jenkins, Rob; Burton, A Mike

    2017-12-01

    We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
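
    The PCA at the core of the "face space" tools can be sketched compactly. Below is a minimal numpy version (a language-swapped illustration of the standard eigenface recipe; InterFace itself is MATLAB-based and its internals may differ).

```python
import numpy as np

def face_pca(images, n_components=10):
    """PCA of aligned face images.

    images: array (n_faces, height, width); returns the mean face,
    the top principal components ("eigenfaces"), and per-face scores.
    """
    n, h, w = images.shape
    X = images.reshape(n, h * w).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal directions in Vt
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]          # directions in "face space"
    scores = (X - mean) @ components.T      # coordinates of each face
    return mean.reshape(h, w), components.reshape(-1, h, w), scores

faces = np.random.rand(50, 32, 32)          # stand-in for aligned faces
mean_face, eigenfaces, scores = face_pca(faces, n_components=5)
print(eigenfaces.shape, scores.shape)       # (5, 32, 32) (50, 5)
```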

  19. Evolution of a modular software network

    PubMed Central

    Fortuna, Miguel A.; Bonachela, Juan A.; Levin, Simon A.

    2011-01-01

    “Evolution behaves like a tinkerer” (François Jacob, Science, 1977). Software systems provide a singular opportunity to understand biological processes using concepts from network theory. The Debian GNU/Linux operating system allows us to explore the evolution of a complex network in a unique way. The modular design detected during its growth is based on the reuse of existing code in order to minimize costs during programming. The increase of modularity experienced by the system over time has not counterbalanced the increase in incompatibilities between software packages within modules. This negative effect is far from being a failure of design. A random process of package installation shows that the higher the modularity, the larger the fraction of packages working properly in a local computer. The decrease in the relative number of conflicts between packages from different modules avoids a failure in the functionality of one package spreading throughout the entire system. Some potential analogies with the evolutionary and ecological processes determining the structure of ecological networks of interacting species are discussed. PMID:22106260

  20. A graphical user interface for RAId, a knowledge integrated proteomics analysis suite with accurate statistics.

    PubMed

    Joyce, Brendan; Lee, Danny; Rubio, Alex; Ogurtsov, Aleksey; Alves, Gelio; Yu, Yi-Kuo

    2018-03-15

    RAId is a software package that has been actively developed for the past 10 years for computationally and visually analyzing MS/MS data. Founded on rigorous statistical methods, RAId's core program computes accurate E-values for peptides and proteins identified during database searches. Making this robust tool readily accessible for the proteomics community by developing a graphical user interface (GUI) is our main goal here. We have constructed a graphical user interface to facilitate the use of RAId on users' local machines. Written in Java, RAId_GUI not only makes it easy to execute RAId but also provides tools for data/spectra visualization, MS-product analysis, molecular isotopic distribution analysis, and graphing the retrieval versus the proportion of false discoveries. The results viewer displays the analysis results and allows users to download them. Both the knowledge-integrated organismal databases and the code package (containing source code, the graphical user interface, and a user manual) are available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/raid.html.

  1. Fast and Accurate Protein False Discovery Rates on Large-Scale Proteomics Data Sets with Percolator 3.0

    NASA Astrophysics Data System (ADS)

    The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas

    2016-11-01

    Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method—grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein—in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
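
    The best-performing inference method described here is simple enough to sketch directly from the abstract's wording. A hedged Python illustration (my own minimal version, not Percolator's code) that groups proteins sharing an identical set of theoretical peptides and scores each group by its single best-scoring peptide:

```python
from collections import defaultdict

def infer_protein_groups(protein_peptides, peptide_scores):
    """Group proteins by identical theoretical-peptide sets, then score
    each group by its best-scoring observed peptide.

    protein_peptides: dict protein -> set of peptide sequences
    peptide_scores:   dict peptide -> score (higher is better)
    returns: list of (proteins, best_peptide, score), best first
    """
    groups = defaultdict(list)
    for protein, peptides in protein_peptides.items():
        groups[frozenset(peptides)].append(protein)

    results = []
    for peptides, proteins in groups.items():
        scored = [(p, peptide_scores[p]) for p in peptides
                  if p in peptide_scores]
        if scored:  # ignore groups with no observed peptides
            best_pep, best = max(scored, key=lambda t: t[1])
            results.append((tuple(sorted(proteins)), best_pep, best))
    return sorted(results, key=lambda t: t[2], reverse=True)
```

    In the full tool, statistical confidence (q values, posterior error probabilities) would then be assigned to these ranked groups; that step is omitted here.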

  2. Fast and Accurate Protein False Discovery Rates on Large-Scale Proteomics Data Sets with Percolator 3.0.

    PubMed

    The, Matthew; MacCoss, Michael J; Noble, William S; Käll, Lukas

    2016-11-01

    Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method (grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein) in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.

  3. Image analysis software versus direct anthropometry for breast measurements.

    PubMed

    Quieregatto, Paulo Rogério; Hochman, Bernardo; Furtado, Fabianne; Machado, Aline Fernanda Perez; Sabino Neto, Miguel; Ferreira, Lydia Masako

    2014-10-01

    To compare breast measurements performed using the software packages ImageTool®, AutoCAD® and Adobe Photoshop® with direct anthropometric measurements. Points were marked on the breasts and arms of 40 volunteer women aged between 18 and 60 years. When connecting the points, seven linear segments and one angular measurement on each half of the body, and one medial segment common to both body halves were defined. The volunteers were photographed in a standardized manner. Photogrammetric measurements were performed by three independent observers using the three software packages and compared to direct anthropometric measurements made with calipers and a protractor. Measurements obtained with AutoCAD® were the most reproducible and those made with ImageTool® were the most similar to direct anthropometry, while measurements with Adobe Photoshop® showed the largest differences. Except for angular measurements, significant differences were found between measurements of line segments made using the three software packages and those obtained by direct anthropometry. AutoCAD® provided the highest precision and intermediate accuracy; ImageTool® had the highest accuracy and lowest precision; and Adobe Photoshop® showed intermediate precision and the worst accuracy among the three software packages.

  4. PyPedal, an open source software package for pedigree analysis

    USDA-ARS?s Scientific Manuscript database

    The open source software package PyPedal (http://pypedal.sourceforge.net/) was first released in 2002, and provided users with a set of simple tools for manipulating pedigrees. Its flexibility has been demonstrated by its use in a number of settings for large and small populations. After substantia...

  5. A Simple Interactive Software Package for Plotting, Animating, and Calculating

    ERIC Educational Resources Information Center

    Engelhardt, Larry

    2012-01-01

    We introduce a new open source (free) software package that provides a simple, highly interactive interface for carrying out certain mathematical tasks that are commonly encountered in physics. These tasks include plotting and animating functions, solving systems of coupled algebraic equations, and basic calculus (differentiating and integrating…

  6. A Software Development Approach for Computer Assisted Language Learning

    ERIC Educational Resources Information Center

    Cushion, Steve

    2005-01-01

    Over the last 5 years we have developed, produced, tested, and evaluated an authoring software package to produce web-based, interactive, audio-enhanced language-learning material. That authoring package has been used to produce language-learning material in French, Spanish, German, Arabic, and Tamil. We are currently working on increasing…

  7. A Multi-User Microcomputer System for Small Libraries.

    ERIC Educational Resources Information Center

    Leggate, Peter

    1988-01-01

    Describes the development of Bookshelf, a multi-user microcomputer system for small libraries that uses an integrated software package. The discussion covers the design parameters of the package, which were based on a survey of seven small libraries, and some characteristics of the software.

  8. Microcomputer Software Programs for Vocational Education.

    ERIC Educational Resources Information Center

    Rodenstein, Judith, Ed.; Lambert, Roger, Ed.

    Over 200 microcomputer software packages applicable to vocational education are listed. Most of the programs are available for the Apple, TRS-80, and Commodore microcomputers. The packages have been reviewed, but have not been formally evaluated. Titles of the programs with names and addresses of the distributors are provided. Telephone numbers…

  9. Software, Copyright, and Site-License Agreements: Publishers' Perspective of Library Practice.

    ERIC Educational Resources Information Center

    Happer, Stephanie K.

    Thirty-one academic publishers of stand-alone software and book/disk packages were surveyed to determine whether publishers have addressed the copyright issues inherent in circulating these packages within the library environment. Twenty-two questionnaires were returned, providing a 71% return rate. There were 18 usable questionnaires. Publishers…

  10. Computerised data reduction.

    PubMed

    Datson, D J; Carter, N G

    1988-10-01

    The use of personal computers in accountancy and business generally has been stimulated by the availability of flexible software packages. We describe the implementation of a commercial software package designed for interfacing with laboratory instruments and highlight the ease with which it can be implemented, without the need for specialist computer programming staff.

  11. wbstats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piburn, Jesse

    2016-04-22

    Access to the World Bank Data API from the R language was previously limited to a single existing package with limited capabilities. This software provides access to all of the features of the World Bank API in one software package for the R language, with functions for searching and downloading data from the World Bank API.
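
    Since wbstats is an R wrapper, a quick way to see what it abstracts away is to query the underlying World Bank API directly. A minimal Python sketch using the publicly documented v2 URL pattern (the example indicator, SP.POP.TOTL, is total population):

```python
import requests

def fetch_indicator(country="US", indicator="SP.POP.TOTL", years="2010:2020"):
    """Download one indicator series from the World Bank API (v2).
    The JSON response is a two-element list: [paging_info, records]."""
    url = f"https://api.worldbank.org/v2/country/{country}/indicator/{indicator}"
    resp = requests.get(url, params={"format": "json",
                                     "date": years,
                                     "per_page": 100})
    resp.raise_for_status()
    meta, records = resp.json()
    return {row["date"]: row["value"] for row in records}

print(fetch_indicator())  # {'2020': ..., ..., '2010': ...}
```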

  12. Propensity Score Analysis in R: A Software Review

    ERIC Educational Resources Information Center

    Keller, Bryan; Tipton, Elizabeth

    2016-01-01

    In this article, we review four software packages for implementing propensity score analysis in R: "Matching, MatchIt, PSAgraphics," and "twang." After briefly discussing essential elements for propensity score analysis, we apply each package to a data set from the Early Childhood Longitudinal Study in order to estimate the…

  13. Software engineering and data management for automated payload experiment tool

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Provancha, Anna; Chattam, David

    1994-01-01

    The Microgravity Projects Office identified a need to develop a software package that will lead experiment developers through the development planning process, obtain necessary information, establish an electronic data exchange avenue, and allow easier manipulation/reformatting of the collected information. An MS-DOS compatible software package called the Automated Payload Experiment Tool (APET) has been developed and delivered. The objective of this task is to expand on the results of the APET work previously performed by UAH and provide versions of the software in a Macintosh and Windows compatible format.

  14. Advanced fingerprint verification software

    NASA Astrophysics Data System (ADS)

    Baradarani, A.; Taylor, J. R. B.; Severin, F.; Maev, R. Gr.

    2016-05-01

    We have developed a fingerprint software package that can be used in a wide range of applications, from law enforcement to public and private security systems, and to personal devices such as laptops, vehicles, and door locks. The software and processing units are a unique implementation of new and sophisticated algorithms that compete with the current best systems in the world. Development of the software package has been in line with the third generation of our ultrasonic fingerprinting machine. Solid and robust performance is achieved in the presence of misplaced and low quality fingerprints.

  15. Gro2mat: a package to efficiently read gromacs output in MATLAB.

    PubMed

    Dien, Hung; Deane, Charlotte M; Knapp, Bernhard

    2014-07-30

    Molecular dynamics (MD) simulations are a state-of-the-art computational method used to investigate molecular interactions at atomic scale. Interaction processes out of experimental reach can be monitored using MD software, such as Gromacs. Here, we present the gro2mat package that allows fast and easy access to Gromacs output files from Matlab. Gro2mat enables direct parsing of the most common Gromacs output formats, including the binary xtc-format. No openly available Matlab parser currently exists for this format. The xtc reader is orders of magnitude faster than other available pdb/ascii workarounds. Gro2mat is especially useful for scientists with an interest in quick prototyping of new mathematical and statistical approaches for Gromacs trajectory analyses. © 2014 Wiley Periodicals, Inc.

  16. Kinematic analysis of total knee prosthesis designed for Asian population.

    PubMed

    Low, F H; Khoo, L P; Chua, C K; Lo, N N

    2000-01-01

    In designing a total knee replacement (TKR) prosthesis catering for the Asian population, 62 sets of femora were harvested and analyzed. The morphometrical data obtained were found to be in good agreement with dimensions typical of the Asian knee and reaffirmed the fact that Caucasian knees are generally larger than Asian knees. Subsequently, these data, when treated using a multivariate statistical technique, resulted in the establishment of major design parameters for six different sizes of femoral implants. An extra-small implant size with established dimensions and geometrical shape emerged from the study. The differences between the Asian knees and the Caucasian knees are discussed. Employing the established femoral dimensions and motion path of the knee joint, the articulating tibia profile was generated. All the sizes of implants were modeled using a computer-aided design software package. These models, which accurately fit the Asian knee, were then imported into a dynamic and kinematic analysis software package. The tibiofemoral joint was modeled successfully as a slide curve joint to study intuitively the motion of the femur when articulating on the tibia surface. An optimal tibia profile could be synthesized to mimic the natural knee path motion. Details of the analysis are presented and discussed.

  17. Minfi: a flexible and comprehensive Bioconductor package for the analysis of Infinium DNA methylation microarrays

    PubMed Central

    Aryee, Martin J.; Jaffe, Andrew E.; Corrada-Bravo, Hector; Ladd-Acosta, Christine; Feinberg, Andrew P.; Hansen, Kasper D.; Irizarry, Rafael A.

    2014-01-01

    Motivation: The recently released Infinium HumanMethylation450 array (the ‘450k’ array) provides a high-throughput assay to quantify DNA methylation (DNAm) at ∼450 000 loci across a range of genomic features. Although less comprehensive than high-throughput sequencing-based techniques, this product is more cost-effective and promises to be the most widely used DNAm high-throughput measurement technology over the next several years. Results: Here we describe a suite of computational tools that incorporate state-of-the-art statistical techniques for the analysis of DNAm data. The software is structured to easily adapt to future versions of the technology. We include methods for preprocessing, quality assessment and detection of differentially methylated regions from the kilobase to the megabase scale. We show how our software provides a powerful and flexible development platform for future methods. We also illustrate how our methods empower the technology to make discoveries previously thought to be possible only with sequencing-based methods. Availability and implementation: http://bioconductor.org/packages/release/bioc/html/minfi.html. Contact: khansen@jhsph.edu; rafa@jimmy.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24478339

  18. A 125 year history of topographic mapping and GIS in the U.S. Geological Survey 1884-2009, part 2: 1980-2009

    USGS Publications Warehouse

    Usery, E. Lynn; Varanka, Dalia; Finn, Michael P.

    2009-01-01

    The United States Geological Survey (USGS) entered the mainstream of developments in computer-assisted technology for mapping during the 1970s. The introduction by USGS of digital line graphs (DLGs), digital elevation models (DEMs), and land use data analysis (LUDA) nationwide land-cover data provided a base for the rapid expansion of the use of GIS in the 1980s. Whereas USGS had developed the topologically structured DLG data and the Geographic Information Retrieval and Analysis System (GIRAS) for land-cover data, the Map Overlay Statistical System (MOSS), a nontopologically structured GIS software package developed by Autometric, Inc., under contract to the U.S. Fish and Wildlife Service, dominated the use of GIS by federal agencies in the 1970s. Thus, USGS data was used in MOSS, but the topological structure, which later became a requirement for GIS vector datasets, was not used in early GIS applications. The introduction of Esri's ARC/INFO in 1982 changed that, and by the end of the 1980s, topological structure for vector data was essential, and ARC/INFO was the dominant GIS software package used by federal agencies.

  19. PB-AM: An open-source, fully analytical linear poisson-boltzmann solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felberg, Lisa E.; Brookes, David H.; Yap, Eng-Hui

    2016-11-02

    We present the open source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized Poisson-Boltzmann equation. The PB-AM software package includes the generation of output files appropriate for visualization using VMD, a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and offers two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to a larger group of scientists, educators and students that are more familiar with the APBS framework.

  20. The AIROPA software package: milestones for testing general relativity in the strong gravity regime with AO

    NASA Astrophysics Data System (ADS)

    Witzel, Gunther; Lu, Jessica R.; Ghez, Andrea M.; Martinez, Gregory D.; Fitzgerald, Michael P.; Britton, Matthew; Sitarski, Breann N.; Do, Tuan; Campbell, Randall D.; Service, Maxwell; Matthews, Keith; Morris, Mark R.; Becklin, E. E.; Wizinowich, Peter L.; Ragland, Sam; Doppmann, Greg; Neyman, Chris; Lyke, James; Kassis, Marc; Rizzi, Luca; Lilley, Scott; Rampy, Rachel

    2016-07-01

    General relativity can be tested in the strong gravity regime by monitoring stars orbiting the supermassive black hole at the Galactic Center with adaptive optics. However, the limiting source of uncertainty is the spatial PSF variability due to atmospheric anisoplanatism and instrumental aberrations. The Galactic Center Group at UCLA has completed a project developing algorithms to predict PSF variability for Keck AO images. We have created a new software package (AIROPA), based on modified versions of StarFinder and Arroyo, that takes atmospheric turbulence profiles, instrumental aberration maps, and images as inputs and delivers improved photometry and astrometry on crowded fields. This software package will be made publicly available soon.

  1. WannierTools: An open-source software package for novel topological materials

    NASA Astrophysics Data System (ADS)

    Wu, QuanSheng; Zhang, ShengNan; Song, Hai-Feng; Troyer, Matthias; Soluyanov, Alexey A.

    2018-03-01

    We present an open-source software package, WannierTools, a tool for the investigation of novel topological materials. This code works in the tight-binding framework, which can be generated by another software package, Wannier90 (Mostofi et al., 2008). It can help to classify the topological phase of a given material by calculating the Wilson loop, and can obtain the surface-state spectrum, which is probed by angle-resolved photoemission spectroscopy (ARPES) and scanning tunneling microscopy (STM) experiments. It also identifies positions of Weyl/Dirac points and nodal-line structures, and calculates the Berry phase around a closed momentum loop and the Berry curvature in a part of the Brillouin zone (BZ).
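
    For reference, the Berry phase computed along a closed momentum loop C has the standard textbook form below, together with its discrete Wilson-loop evaluation; these are generic definitions, not formulas quoted from the WannierTools paper.

```latex
% Berry connection A(k), Berry phase along a closed loop C, and the
% discrete (Wilson-loop) evaluation over points k_1, ..., k_N on C:
\mathbf{A}(\mathbf{k}) = i\,\langle u(\mathbf{k})\,|\,\nabla_{\mathbf{k}}\,|\,u(\mathbf{k})\rangle,
\qquad
\varphi = \oint_{C} \mathbf{A}(\mathbf{k})\cdot d\mathbf{k}
\;\approx\; -\,\operatorname{Im}\,\ln \prod_{j=1}^{N}
\langle u(\mathbf{k}_j)\,|\,u(\mathbf{k}_{j+1})\rangle ,
\quad \mathbf{k}_{N+1}\equiv\mathbf{k}_1 .
```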

  2. Use of symbolic computation in robotics education

    NASA Technical Reports Server (NTRS)

    Vira, Naren; Tunstel, Edward

    1992-01-01

    An application of symbolic computation in robotics education is described. A software package is presented which combines generality, user interaction, and user-friendliness with the systematic usage of symbolic computation and artificial intelligence techniques. The software utilizes MACSYMA, a LISP-based symbolic algebra language, to automatically generate closed-form expressions representing forward and inverse kinematics solutions, the Jacobian transformation matrices, robot pose error-compensation models equations, and Lagrange dynamics formulation for N degree-of-freedom, open chain robotic manipulators. The goal of such a package is to aid faculty and students in the robotics course by removing burdensome tasks of mathematical manipulations. The software package has been successfully tested for its accuracy using commercially available robots.

  3. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    PubMed

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution is concentrated on facilitating extensibility and interoperability of the software through decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. Contact: wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.

  4. OpenStereo: Open Source, Cross-Platform Software for Structural Geology Analysis

    NASA Astrophysics Data System (ADS)

    Grohmann, C. H.; Campanha, G. A.

    2010-12-01

    Free and open source software (FOSS) is increasingly seen as synonymous with innovation and progress. Freedom to run, copy, distribute, study, change and improve the software (through access to the source code) assures a high level of positive feedback between users and developers, which results in stable, secure and constantly updated systems. Several software packages for structural geology analysis are available to the user, with commercial licenses or that can be downloaded at no cost from the Internet. Some provide basic tools of stereographic projections such as plotting poles, great circles, density contouring, eigenvector analysis, data rotation etc., while others perform more specific tasks, such as paleostress or geotechnical/rock stability analysis. This variety also means a wide range of data formats for input, Graphical User Interface (GUI) design and graphic export formats. The majority of packages are built for MS-Windows, and even though there are packages for the UNIX-based MacOS, there aren't native packages for *nix (UNIX, Linux, BSD etc.) Operating Systems (OS), forcing users to run these programs with emulators or virtual machines. Those limitations led us to develop OpenStereo, an open source, cross-platform software package for stereographic projections and structural geology. The software is written in Python, a high-level, cross-platform programming language, and the GUI is designed with wxPython, which provides a consistent look regardless of the OS. Numeric operations (like matrix and linear algebra) are performed with the Numpy module, and all graphic capabilities are provided by the Matplotlib library, including on-screen plotting and graphic exporting to common desktop formats (emf, eps, ps, pdf, png, svg). Data input is done with simple ASCII text files, with values of dip direction and dip/plunge separated by spaces, tabs or commas. The user can open multiple files at the same time (or the same file more than once), and overlay different elements of each dataset (poles, great circles etc.). The GUI shows the opened files in a tree structure, similar to the "layers" of many illustration programs, where the vertical order of the files in the tree reflects the drawing order of the selected elements. At this stage, the software performs plotting of poles to planes, lineations, great circles, density contours and rose diagrams. A set of statistics is calculated for each file, and its eigenvalues and eigenvectors are used to suggest whether the data are clustered about a mean value or distributed along a girdle. Modified Flinn, triangular and histogram plots are also available. The next steps of development will focus on tools such as merging and rotation of datasets, the possibility to save 'projects', and paleostress analysis. In its current state, OpenStereo requires Python, wxPython, Numpy and Matplotlib installed in the system. We recommend installing PythonXY or the Enthought Python Distribution on MS-Windows and MacOS machines, since all dependencies are provided. Most Linux distributions provide an easy way to install all dependencies through software repositories. OpenStereo is released under the GNU General Public License. Programmers willing to contribute are encouraged to contact the authors directly. FAPESP Grant #09/17675-5
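
    The eigenvector-based cluster/girdle suggestion the abstract describes can be sketched directly. Below is a minimal Python version using the standard orientation-matrix and Woodcock-ratio recipe (my own illustration, not OpenStereo's code).

```python
import numpy as np

def pole_from_plane(dipdir_deg, dip_deg):
    """Downward unit pole (north, east, down) to a plane given as
    dip direction / dip, the input format the abstract describes."""
    trend = np.radians((np.asarray(dipdir_deg) + 180.0) % 360.0)
    plunge = np.radians(90.0 - np.asarray(dip_deg))
    return np.column_stack([np.cos(plunge) * np.cos(trend),
                            np.cos(plunge) * np.sin(trend),
                            np.sin(plunge)])

def cluster_or_girdle(poles):
    """Eigen-analysis of the orientation matrix; Woodcock's ratio
    K > 1 suggests a cluster, K < 1 a girdle."""
    T = poles.T @ poles / len(poles)
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]   # l1 >= l2 >= l3
    K = np.log(lam[0] / lam[1]) / np.log(lam[1] / lam[2])
    return lam, K

poles = pole_from_plane([120, 125, 118, 122], [40, 42, 38, 41])
print(cluster_or_girdle(poles))   # large K: tightly clustered poles
```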

  5. A User-Friendly Software Package for HIFU Simulation

    NASA Astrophysics Data System (ADS)

    Soneson, Joshua E.

    2009-04-01

    A freely-distributed, MATLAB (The MathWorks, Inc., Natick, MA)-based software package for simulating axisymmetric high-intensity focused ultrasound (HIFU) beams and their heating effects is discussed. The package (HIFU_Simulator) consists of a propagation module which solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and a heating module which solves Pennes' bioheat transfer (BHT) equation. The pressure, intensity, heating rate, temperature, and thermal dose fields are computed and plotted, and the output is released to the MATLAB workspace for further user analysis or postprocessing.
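
    For context, the heating module's governing equation, Pennes' BHT, and the usual CEM43 thermal-dose integral take the following standard forms; these are generic textbook statements, and the package's exact formulation and nondimensionalization may differ.

```latex
% Pennes' bioheat transfer (BHT) equation in its standard form
% (T: tissue temperature, T_a: arterial temperature, w_b: blood
% perfusion rate, Q: volumetric heating from acoustic absorption),
% followed by the CEM43 thermal-dose integral:
\rho C \frac{\partial T}{\partial t}
  = \kappa\,\nabla^{2} T \;-\; w_b C_b\,(T - T_a) \;+\; Q,
\qquad
t_{43} = \int_{0}^{t} R^{\,43 - T(\tau)}\,d\tau,
\quad
R = \begin{cases} 0.25, & T < 43\,^{\circ}\mathrm{C} \\
                  0.5,  & T \ge 43\,^{\circ}\mathrm{C} \end{cases}
```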

  6. Demeter-W

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-09-27

    Demeter-W, an open-access software package written in Python, consists of extensible modules. It implements statistical downscaling algorithms to downscale water demand data to finer spatial and temporal scales: the spatial resolution is downscaled from the region/basin scale to a 0.5 geographic degree grid, and the temporal resolution from year to month. For a better understanding of the driving forces and patterns of global water withdrawal, researchers can use Demeter-W to reconstruct data sets and examine issues related to water withdrawals at fine spatial and temporal scales.
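
    A stripped-down stand-in for the proportional part of such downscaling is easy to write. The Python sketch below (my own minimal illustration, not Demeter-W's algorithm) splits one region-year total over grid cells and months using normalized weights.

```python
import numpy as np

def downscale_demand(annual_region_total, grid_weights, monthly_profile):
    """Split one region-year water-demand total onto grid cells and
    12 months using proportional weights.

    grid_weights:    (n_cells,) spatial proxy, e.g. population share
    monthly_profile: (12,) temporal proxy, e.g. climatological share
    returns:         (n_cells, 12) downscaled demand
    """
    w = np.asarray(grid_weights, dtype=float)
    m = np.asarray(monthly_profile, dtype=float)
    w /= w.sum()                      # normalize so mass is conserved
    m /= m.sum()
    return annual_region_total * np.outer(w, m)

demand = downscale_demand(120.0, [0.5, 0.3, 0.2], np.ones(12))
print(demand.sum())  # 120.0 -- the regional total is preserved
```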

  7. Software package for modeling spin-orbit motion in storage rings

    NASA Astrophysics Data System (ADS)

    Zyuzin, D. V.

    2015-12-01

    A software package providing a graphical user interface for computer experiments on the motion of charged particle beams in accelerators, as well as analysis of obtained data, is presented. The software package was tested in the framework of the international project on electric dipole moment measurement JEDI (Jülich Electric Dipole moment Investigations). The specific features of particle spin motion imply the requirement to use a cyclic accelerator (storage ring) consisting of electrostatic elements, which makes it possible to preserve horizontal polarization for a long time. Computer experiments study the dynamics of 10⁶-10⁹ particles in a beam during 10⁹ turns in an accelerator (about 10¹²-10¹⁵ integration steps for the equations of motion). For designing an optimal accelerator structure, a large number of computer experiments on polarized beam dynamics are required. The numerical core of the package is COSY Infinity, a program for modeling spin-orbit dynamics.

  8. Reference datasets for bioequivalence trials in a two-group parallel design.

    PubMed

    Fuglsang, Anders; Schütz, Helmut; Labes, Detlew

    2015-03-01

    In order to help companies qualify and validate the software used to evaluate bioequivalence trials with two parallel treatment groups, this work aims to define datasets with known results. This paper puts a total 11 datasets into the public domain along with proposed consensus obtained via evaluations from six different software packages (R, SAS, WinNonlin, OpenOffice Calc, Kinetica, EquivTest). Insofar as possible, datasets were evaluated with and without the assumption of equal variances for the construction of a 90% confidence interval. Not all software packages provide functionality for the assumption of unequal variances (EquivTest, Kinetica), and not all packages can handle datasets with more than 1000 subjects per group (WinNonlin). Where results could be obtained across all packages, one showed questionable results when datasets contained unequal group sizes (Kinetica). A proposal is made for the results that should be used as validation targets.
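
    The core computation being validated here, a 90% confidence interval for the ratio of geometric means with and without the equal-variance assumption, can be sketched as follows (a minimal Python/scipy illustration of the standard log-scale t-interval, not code from any of the six packages compared).

```python
import numpy as np
from scipy import stats

def be_ci90(test, ref, equal_var=True):
    """90% confidence interval for the ratio of geometric means in a
    two-group parallel bioequivalence design (log-scale t-interval)."""
    x, y = np.log(test), np.log(ref)
    nx, ny = len(x), len(y)
    d = x.mean() - y.mean()
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    if equal_var:                     # pooled-variance interval
        sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
        se, df = np.sqrt(sp2 * (1 / nx + 1 / ny)), nx + ny - 2
    else:                             # Welch-Satterthwaite interval
        se = np.sqrt(vx / nx + vy / ny)
        df = se**4 / ((vx / nx)**2 / (nx - 1) + (vy / ny)**2 / (ny - 1))
    t = stats.t.ppf(0.95, df)         # 90% CI => 5% in each tail
    return np.exp(d - t * se), np.exp(d + t * se)

rng = np.random.default_rng(1)
print(be_ci90(rng.lognormal(0.0, 0.3, 24), rng.lognormal(0.05, 0.3, 24)))
```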

  9. Painting a picture across the landscape with ModelMap

    Treesearch

    Brian Cooke; Elizabeth Freeman; Gretchen Moisen; Tracey Frescino

    2017-01-01

    Scientists and statisticians working for the Rocky Mountain Research Station have created a software package that simplifies and automates many of the processes needed for converting models into maps. This software package, called ModelMap, has helped a variety of specialists and land managers to quickly convert data into easily understood graphical images. The...

  10. "FluSpec": A Simulated Experiment in Fluorescence Spectroscopy

    ERIC Educational Resources Information Center

    Bigger, Stephen W.; Bigger, Andrew S.; Ghiggino, Kenneth P.

    2014-01-01

    The "FluSpec" educational software package is a fully contained tutorial on the technique of fluorescence spectroscopy as well as a simulator on which experiments can be performed. The procedure for each of the experiments is also contained within the package along with example analyses of results that are obtained using the software.

  11. Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)

    ERIC Educational Resources Information Center

    Yavuz, Guler; Hambleton, Ronald K.

    2017-01-01

    Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…

  12. Macintosh Computer Classroom and Laboratory Security: Preventing Unwanted Changes to the System.

    ERIC Educational Resources Information Center

    Senn, Gary J.; Smyth, Thomas J. C.

    Because of the graphical interface and "openness" of the operating system, Macintosh computers are susceptible to undesirable changes by the user. This presentation discusses the advantages and disadvantages of software packages that offer protection for the Macintosh system. The two basic forms of software security packages include a…

  13. AnClim and ProClimDB software for data quality control and homogenization of time series

    NASA Astrophysics Data System (ADS)

    Stepanek, Petr

    2015-04-01

    During the last decade, a software package consisting of AnClim, ProClimDB and LoadData for processing (mainly climatological) data has been created. This software offers a comprehensive solution for processing climatological time series, starting from loading the data from a central database (e.g. Oracle, via the LoadData software), through data quality control and homogenization, to time series analysis, extreme value evaluation, and RCM output verification and correction (the ProClimDB and AnClim software). The detection of inhomogeneities is carried out on a monthly scale through the application of AnClim, or newly by R functions called from ProClimDB, while quality control, the preparation of reference series and the correction of found breaks are carried out by the ProClimDB software. The software combines many statistical tests, types of reference series and time scales (monthly, seasonal and annual, daily and sub-daily ones). These can be used to create an "ensemble" of solutions, which may be more reliable than any single method. AnClim software is suitable for educational purposes, e.g. for students getting acquainted with methods used in climatology. Built-in graphical tools and comparison of various statistical tests help in better understanding of a given method. ProClimDB is, on the contrary, a tool aimed at processing large climatological datasets. Recently, functions from R may be used within the software, making it more efficient in data processing and capable of easy inclusion of new methods (when available under R). An example of usage is easy comparison of methods for correction of inhomogeneities in daily data (HOM of Paul Della-Marta, the SPLIDHOM method of Olivier Mestre, our own DAP method, QM of Xiaolan Wang, and others). The software is available, together with further information, at www.climahom.eu. Acknowledgement: this work was partially funded by the project "Building up a multidisciplinary scientific team focused on drought" No. CZ.1.07/2.3.00/20.0248.

  14. Virginia Transit Performance Evaluation Package (VATPEP).

    DOT National Transportation Integrated Search

    1987-01-01

    The Virginia Transit Performance Evaluation Package (VATPEP), a computer software package, is documented. This is the computerized version of the methodology used by the Virginia Department of Transportation to evaluate the performance of public tran...

  15. Cooperative Work and Sustainable Scientific Software Practices in R

    NASA Astrophysics Data System (ADS)

    Weber, N.

    2013-12-01

    Most scientific software projects are dependent on the work of many diverse people, institutions and organizations. Incentivizing these actors to cooperatively develop software that is both reliable and sustainable is complicated by the fact that the reward structures of these various actors greatly differ: research scientists want results from a software or model run in order to publish papers, produce new data, or test a hypothesis; software engineers and research centers want compilable, well documented code that is refactorable, reusable and reproducible in future research scenarios. While much research has been done on incentives and motivations for participating in open source software projects or cyberinfrastructure development, little work has been done on what motivates or incentivizes developers to maintain scientific software projects beyond their original application. This poster will present early results of research into the incentives and motivation for cooperative scientific software development. In particular, this work focuses on motivations for the maintenance and repair of libraries on the software platform R. Our work here uses a sample of R packages that were created by research centers, or are specific to earth, environmental and climate science applications. We first mined 'check' logs from the Comprehensive R Archive Network (CRAN) to determine the amount of time a package has existed, the number of versions it has gone through over this time, the number of releases, and finally the contact information for each official package 'maintainer'. We then sent a survey to each official maintainer, asking them questions about what role they played in developing the original package, and what their motivations were for sustaining the project over time. We will present early results from this mining and our survey of R maintainers.

  16. Laser velocimeter data acquisition system for the Langley 14- by 22-foot subsonic tunnel. Software reference guide version 3.3

    NASA Technical Reports Server (NTRS)

    Jumper, Judith K.

    1994-01-01

    The Laser Velocimeter Data Acquisition System (LVDAS) in the Langley 14- by 22-Foot Tunnel is controlled by a comprehensive software package. The software package was designed to control the data acquisition process during wind tunnel tests which employ a laser velocimeter measurement system. This report provides detailed explanations on how to configure and operate the LVDAS system to acquire laser velocimeter and static wind tunnel data.

  17. Space-Shuttle Emulator Software

    NASA Technical Reports Server (NTRS)

    Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram

    2007-01-01

    A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.

  18. Development of high performance scientific components for interoperability of computing packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulabani, Teena Pratap

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each package. Chemistry algorithms are hard and time-consuming to develop; integration of large quantum chemistry packages allows resource sharing and thus avoids reinventing the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.
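
    The plug-and-play principle behind CCA can be pictured in miniature: every chemistry engine is hidden behind one shared interface, so a driver can swap implementations without code changes. The Python sketch below is a conceptual analogue only; the class and method names are invented and do not reflect the actual CCA API:

        from abc import ABC, abstractmethod

        class EnergyComponent(ABC):
            # The common "port" every wrapped chemistry package implements.
            @abstractmethod
            def compute_energy(self, geometry):
                ...

        class NWChemComponent(EnergyComponent):
            def compute_energy(self, geometry):
                return -76.0  # a real component would call into NWChem here

        class MPQCComponent(EnergyComponent):
            def compute_energy(self, geometry):
                return -76.0  # stubbed likewise

        def driver(component, geometry):
            # The driver depends only on the port, not on a concrete package.
            return component.compute_energy(geometry)

        water = [("O", 0.0, 0.0, 0.0), ("H", 0.96, 0.0, 0.0), ("H", -0.24, 0.93, 0.0)]
        print(driver(NWChemComponent(), water))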

  19. AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.

    PubMed

    Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld

    2016-08-01

    There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge, due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory to find existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org, and the source code is available under the GPL license at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
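
    A call to an AlgoRun-packaged algorithm over its RESTful API might look like the sketch below; the port, endpoint path and payload field are assumptions made for illustration, so the actual interface should be taken from the AlgoRun documentation:

        import json
        import urllib.request

        # Hypothetical endpoint of a locally running AlgoRun container; the
        # path "/v1/run" and the "input" field are assumptions, not the
        # documented AlgoRun API.
        url = "http://localhost:8765/v1/run"
        payload = json.dumps({"input": "ACGTACGT"}).encode("utf-8")

        req = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=60) as resp:
            print(json.loads(resp.read().decode("utf-8")))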

  20. Predictive Utility of Marketed Volumetric Software Tools in Subjects at Risk for Alzheimer's: Do Regions Outside the Hippocampus Matter?

    PubMed Central

    Tanpitukpongse, Teerath P.; Mazurowski, Maciej A.; Ikhena, John; Petrella, Jeffrey R.

    2016-01-01

    Background and Purpose To assess the prognostic efficacy of individual versus combined regional volumetrics in two commercially-available brain volumetric software packages for predicting conversion of patients with mild cognitive impairment to Alzheimer's disease. Materials and Methods Data were obtained through the Alzheimer's Disease Neuroimaging Initiative. A total of 192 subjects (mean age 74.8 years, 39% female) diagnosed with mild cognitive impairment at baseline were studied. All had T1WI MRI sequences at baseline and 3-year clinical follow-up. Analysis was performed with NeuroQuant® and Neuroreader™. Receiver operating characteristic curves assessing the prognostic efficacy of each software package were generated using a univariable approach employing individual regional brain volumes, as well as two multivariable approaches (logistic regression and random forest) combining multiple volumes. Results On univariable analysis of 11 NeuroQuant® and 11 Neuroreader™ regional volumes, hippocampal volume had the highest area under the curve for both software packages (0.69 NeuroQuant®, 0.68 Neuroreader™), and did not differ significantly (p > 0.05) between packages. Multivariable analysis did not increase the area under the curve for either package (0.63 logistic regression, 0.60 random forest for NeuroQuant®; 0.65 logistic regression, 0.62 random forest for Neuroreader™). Conclusion Of the multiple regional volume measures available in FDA-cleared brain volumetric software packages, hippocampal volume remains the best single predictor of conversion of mild cognitive impairment to Alzheimer's disease at 3-year follow-up. Combining volumetrics did not add prognostic efficacy. Therefore, future prognostic studies in MCI that combine such tools with demographic and other biomarker measures are justified in using hippocampal volume as the only volumetric biomarker. PMID:28057634
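
    The univariable ROC analysis used above is straightforward to reproduce for any single volumetric predictor. A minimal sketch with scikit-learn on synthetic stand-in data (all numbers invented):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)

        # Synthetic stand-ins: 1 = converted to Alzheimer's within 3 years.
        converted = rng.integers(0, 2, 192)
        # Converters get a slightly smaller hippocampal volume (arbitrary effect).
        hippo_vol = rng.normal(3.2, 0.4, 192) - 0.25 * converted

        # Volume relates inversely to conversion, so score with the negative.
        auc = roc_auc_score(converted, -hippo_vol)
        print(f"AUC for hippocampal volume: {auc:.2f}")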

  1. Comparison of Perfusion CT Software to Predict the Final Infarct Volume After Thrombectomy.

    PubMed

    Austein, Friederike; Riedel, Christian; Kerby, Tina; Meyne, Johannes; Binder, Andreas; Lindner, Thomas; Huhndorf, Monika; Wodarg, Fritz; Jansen, Olav

    2016-09-01

    Computed tomographic perfusion represents an interesting physiological imaging modality for selecting patients for reperfusion therapy in acute ischemic stroke. The purpose of our study was to determine the accuracy of different commercial perfusion CT software packages (Philips (A), Siemens (B), and RAPID (C)) in predicting the final infarct volume (FIV) after mechanical thrombectomy. Single-institutional computed tomographic perfusion data from 147 mechanically recanalized acute ischemic stroke patients were postprocessed. Ischemic core and FIV were compared with respect to thrombolysis in cerebral infarction (TICI) score and time interval to reperfusion. FIV was measured at follow-up imaging between days 1 and 8 after stroke. In 118 successfully recanalized patients (TICI 2b/3), a moderately to strongly positive correlation was observed between ischemic core and FIV. The highest accuracy and best correlation were seen in early and fully recanalized patients (Pearson r for A=0.42, B=0.64, and C=0.83; P<0.001). Bland-Altman plots and boxplots demonstrate smaller ranges for package C than for A and B. Significant differences were found between the packages with respect to over- and underestimation of the ischemic core. Package A, compared with B and C, estimated more than twice as many patients as having a malignant stroke profile (P<0.001). Package C best predicted hypoperfusion volume in nonsuccessfully recanalized patients. Our study demonstrates the best accuracy and closest agreement with the FIV for a fully automated software package (RAPID), especially in early and fully recanalized patients. Furthermore, this software package overestimated the FIV to a significantly lower degree and estimated a malignant mismatch profile less often than the other software. © 2016 American Heart Association, Inc.
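
    Bland-Altman agreement between a software core estimate and the FIV can be sketched as follows on synthetic data (values invented; only the method-comparison layout is shown):

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(2)
        fiv = rng.gamma(shape=2.0, scale=20.0, size=118)   # final infarct volume (mL)
        core = fiv + rng.normal(0.0, 15.0, size=118)       # software core estimate

        mean_vals = (core + fiv) / 2.0
        diff_vals = core - fiv
        bias = diff_vals.mean()
        loa = 1.96 * diff_vals.std(ddof=1)                 # 95% limits of agreement

        plt.scatter(mean_vals, diff_vals, s=12)
        plt.axhline(bias, color="k")
        plt.axhline(bias + loa, color="k", linestyle="--")
        plt.axhline(bias - loa, color="k", linestyle="--")
        plt.xlabel("Mean of core estimate and FIV (mL)")
        plt.ylabel("Core estimate minus FIV (mL)")
        plt.show()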

  2. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as a Monte Carlo analysis capability, is included to enable statistical performance evaluations.
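
    The baseline that a desensitized design extends is the conventional linear Kalman filter. The sketch below tracks position and velocity from noisy position measurements; the model matrices are chosen purely for illustration:

        import numpy as np

        # Constant-velocity model: state = [position, velocity].
        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
        H = np.array([[1.0, 0.0]])               # position is observed
        Q = 0.01 * np.eye(2)                     # process noise covariance
        R = np.array([[0.5]])                    # measurement noise covariance

        x = np.zeros(2)   # state estimate
        P = np.eye(2)     # estimate covariance

        rng = np.random.default_rng(3)
        truth = np.cumsum(np.ones(50))                  # true position 1, 2, 3, ...
        measurements = truth + rng.normal(0, 0.7, 50)   # noisy sensor

        for z in measurements:
            # Predict.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.atleast_1d(z) - H @ x)
            P = (np.eye(2) - K @ H) @ P

        print(x)  # estimated [position, velocity] after 50 measurements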

  3. Statistical tests for detecting associations with groups of genetic variants: generalization, evaluation, and implementation

    PubMed Central

    Ferguson, John; Wheeler, William; Fu, YiPing; Prokunina-Olsson, Ludmila; Zhao, Hongyu; Sampson, Joshua

    2013-01-01

    With recent advances in sequencing, genotyping arrays, and imputation, GWAS now aim to identify associations with rare and uncommon genetic variants. Here, we describe and evaluate a class of statistics, generalized score statistics (GSS), that can test for an association between a group of genetic variants and a phenotype. GSS are a simple weighted sum of single-variant statistics and their cross-products. We show that the majority of statistics currently used to detect associations with rare variants are equivalent to choosing a specific set of weights within this framework. We then evaluate the power of various weighting schemes as a function of variant characteristics, such as the minor allele frequency (MAF), the proportion of variants associated with the phenotype, and the direction of effect. Ultimately, we find that two classical tests are robust and powerful, but details are provided as to when other GSS may perform favorably. The software package CRaVe is available at our website (http://dceg.cancer.gov/bb/tools/crave). PMID:23092956
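
    Because GSS reduce to T = Z'WZ, a weighted sum of the single-variant score statistics Z and their cross-products, a toy version is short to write. In the sketch below (an illustration, not the CRaVe code), an all-ones weight matrix recovers a burden-style test and the identity matrix a variance-component-style test:

        import numpy as np

        def gss(Z, W):
            # Generalized score statistic: T = Z' W Z, i.e. a weighted sum of
            # single-variant statistics and their cross-products.
            Z = np.asarray(Z, dtype=float)
            return float(Z @ W @ Z)

        Z = np.array([1.2, -0.4, 2.1, 0.3])  # single-variant score statistics
        m = len(Z)

        burden_like = gss(Z, np.ones((m, m)))  # equals (sum Z)^2: direction-sensitive
        skat_like = gss(Z, np.eye(m))          # equals sum Z^2: direction-robust
        print(burden_like, skat_like)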

  4. CORSSA: Community Online Resource for Statistical Seismicity Analysis

    NASA Astrophysics Data System (ADS)

    Zechar, J. D.; Hardebeck, J. L.; Michael, A. J.; Naylor, M.; Steacy, S.; Wiemer, S.; Zhuang, J.

    2011-12-01

    Statistical seismology is critical to the understanding of seismicity, the evaluation of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology, especially to those aspects with great impact on public policy, statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal of enhancing the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA, www.corssa.org). We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined, a governing structure was organized, and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each of which will contain between four and eight articles. CORSSA now includes seven articles, with an additional six in draft form, along with forums for discussion, a glossary, and news about upcoming meetings, special issues, and recent papers. Each article is peer-reviewed and presents a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles include: introductions to both CORSSA and statistical seismology; basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. We have also begun curating a collection of statistical seismology software packages.

  5. User's Guide for MapIMG 2: Map Image Re-projection Software Package

    USGS Publications Warehouse

    Finn, Michael P.; Trent, Jason R.; Buehler, Robert A.

    2006-01-01

    BACKGROUND Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in commercial software packages, but implementation with data other than points requires specific adaptation of the transformation equations or prior preparation of the data to allow the transformation to succeed. It seems that some of these packages use the U.S. Geological Survey's (USGS) General Cartographic Transformation Package (GCTP) or similar point transformations without adaptation to the specific characteristics of raster data (Usery and others, 2003a). Usery and others (2003b) compiled and tabulated the accuracy of categorical areas in projected raster datasets of global extent. Based on the shortcomings identified in these studies, geographers and applications programmers at the USGS expanded and evolved a USGS software package, MapIMG, for raster map projection transformation (Finn and Trent, 2004). Daniel R. Steinwand of Science Applications International Corporation, National Center for Earth Resources Observation and Science, originally developed MapIMG for the USGS, basing it on GCTP. Through previous and continuing efforts at the USGS' National Geospatial Technical Operations Center, this program has been transformed from an application based on command line input into a software package based on a graphical user interface for Windows, Linux, and other UNIX machines.

  6. CMIP: a software package capable of reconstructing genome-wide regulatory networks using gene expression data.

    PubMed

    Zheng, Guangyong; Xu, Yaochen; Zhang, Xiujun; Liu, Zhi-Ping; Wang, Zhuo; Chen, Luonan; Zhu, Xin-Guang

    2016-12-23

    A gene regulatory network (GRN) represents the interactions of genes inside a cell or tissue, in which vertices and edges stand for genes and their regulatory interactions, respectively. Reconstruction of gene regulatory networks, in particular genome-scale networks, is essential for comparative exploration of different species and mechanistic investigation of biological processes. Currently, most network inference methods are computationally intensive: they are usually effective for small-scale tasks (e.g., networks with a few hundred genes) but have difficulty constructing GRNs at genome scale. Here, we present a software package for gene regulatory network reconstruction at a genomic level, in which gene interaction is measured by conditional mutual information within a parallel computing framework (hence the package is named CMIP). The package is a greatly improved implementation of our previous PCA-CMI algorithm. In CMIP, we provide not only an automatic threshold determination method but also an effective parallel computing framework for network inference. Performance tests on benchmark datasets show that the accuracy of CMIP is comparable to that of most current network inference methods. Moreover, running tests on synthetic datasets demonstrate that CMIP can handle large datasets, especially genome-wide datasets, within an acceptable time period. In addition, successful application to a real genomic dataset confirms the practical applicability of the package. This new software package provides a powerful tool for genomic network reconstruction to the biological community. The software can be accessed at http://www.picb.ac.cn/CMIP/ .
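
    Under a Gaussian model, the conditional mutual information at the heart of the PCA-CMI approach has a closed form in covariance determinants: CMI(X;Y|Z) = 0.5 log(|C_XZ| |C_YZ| / (|C_Z| |C_XYZ|)). A small sketch of that formula (illustrative only, not the CMIP code):

        import numpy as np

        def gaussian_cmi(x, y, z):
            # I(X;Y|Z) for jointly Gaussian variables, from determinants of
            # sample covariance matrices.
            det = np.linalg.det
            c_xz = np.cov(np.vstack([x, z]))
            c_yz = np.cov(np.vstack([y, z]))
            c_z = np.atleast_2d(np.cov(z))
            c_xyz = np.cov(np.vstack([x, y, z]))
            return 0.5 * np.log(det(c_xz) * det(c_yz) / (det(c_z) * det(c_xyz)))

        rng = np.random.default_rng(4)
        z = rng.normal(size=2000)
        x = z + 0.3 * rng.normal(size=2000)   # x regulated by z
        y = z + 0.3 * rng.normal(size=2000)   # y regulated by z only
        print(gaussian_cmi(x, y, z))          # near 0: no direct x-y edge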

  7. Surface CHEMKIN (Version 4. 0): A Fortran package for analyzing heterogeneous chemical kinetics at a solid-surface---gas-phase interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coltrin, M.E.; Kee, R.J.; Rupley, F.M.

    1991-07-01

    Heterogeneous reaction at the interface between a solid surface and adjacent gas is central to many chemical processes. Our development of the software package SURFACE CHEMKIN was motivated by our need to understand the complex surface chemistry in chemical vapor deposition systems involving silicon, silicon nitride, and gallium arsenide. However, we have developed the approach and implemented the software in a general setting. Thus, we expect it will find use in such diverse applications as chemical vapor deposition, chemical etching, combustion of solids, and catalytic processes, and for a wide range of chemical systems. We believe that it provides a powerful capability to help model, understand, and optimize important industrial and research chemical processes. The SURFACE CHEMKIN software is designed to work in conjunction with the CHEMKIN-2 software, which handles the chemical kinetics in the gas phase. It may also be used in conjunction with the Transport Property Package, which provides information about molecular diffusion. Thus, these three packages provide a foundation on which a user can build applications software to analyze gas-phase and heterogeneous chemistry in flowing systems. These packages should not be considered "programs" in the ordinary sense; they are not designed to accept input, solve a particular problem, and report the answer. Instead, they are software tools intended to help a user work efficiently with large systems of chemical reactions and develop Fortran representations of systems of equations that define a particular problem. It is up to the user to solve the problem and interpret the answer. 11 refs., 15 figs., 5 tabs.

  8. Flight simulation software at NASA Dryden Flight Research Center

    NASA Technical Reports Server (NTRS)

    Norlin, Ken A.

    1995-01-01

    The NASA Dryden Flight Research Center has developed a versatile simulation software package that is applicable to a broad range of fixed-wing aircraft. This package has evolved in support of a variety of flight research programs. The structure is designed to be flexible enough for use in batch-mode, real-time pilot-in-the-loop, and flight hardware-in-the-loop simulation. Current simulations operate on UNIX-based platforms and are coded with a FORTRAN shell and C support routines. This paper discusses the features of the simulation software design and some basic model development techniques. The key capabilities that have been included in the simulation are described. The NASA Dryden simulation software is in use at other NASA centers, within industry, and at several universities. The straightforward but flexible design of this well-validated package makes it especially useful in an engineering environment.

  9. The Ettention software package.

    PubMed

    Dahmen, Tim; Marsalek, Lukas; Marniok, Nico; Turoňová, Beata; Bogachev, Sviatoslav; Trampert, Patrick; Nickels, Stefan; Slusallek, Philipp

    2016-02-01

    We present a novel software package for the problem of "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building blocks for tomographic reconstruction algorithms. The well-known block-iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or many-core architectures like the Xeon Phi, and (3) accessibility for microscopy end-users via integration in the IMOD package and the eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily, which makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing. Copyright © 2015 Elsevier B.V. All rights reserved.
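
    The Kaczmarz method underlying this block-iterative scheme sweeps through the rows of the projection system Ax = b, projecting the current estimate onto each row's hyperplane. A minimal dense-matrix sketch follows; real electron tomography systems are large, sparse and GPU-accelerated, none of which is shown here:

        import numpy as np

        def kaczmarz(A, b, sweeps=50, relax=1.0):
            # Cyclic row projections; block-iterative variants (as in
            # Ettention) update with groups of rows at a time instead.
            x = np.zeros(A.shape[1])
            row_norms = (A * A).sum(axis=1)
            for _ in range(sweeps):
                for i in range(A.shape[0]):
                    residual = b[i] - A[i] @ x
                    x += relax * residual / row_norms[i] * A[i]
            return x

        # Toy system standing in for a tiny projection geometry.
        A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]])
        x_true = np.array([0.5, -1.0])
        b = A @ x_true
        print(kaczmarz(A, b))  # converges toward x_true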

  10. PB-AM: An open-source, fully analytical linear poisson-boltzmann solver.

    PubMed

    Felberg, Lisa E; Brookes, David H; Yap, Eng-Hui; Jurrus, Elizabeth; Baker, Nathan A; Head-Gordon, Teresa

    2017-06-05

    We present the open-source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized PB equation for molecules represented as non-overlapping spherical cavities. The PB-AM software package includes the generation of output files appropriate for visualization with Visual Molecular Dynamics (VMD), a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to the larger group of scientists, educators, and students who are familiar with the APBS framework. © 2016 Wiley Periodicals, Inc.

  11. Defining Geodetic Reference Frame using Matlab®: PlatEMotion 2.0

    NASA Astrophysics Data System (ADS)

    Cannavò, Flavio; Palano, Mimmo

    2016-03-01

    We describe the main features of the software tool PlatE-Motion 2.0 (PEM2), which infers the Euler pole parameters by inverting the observed velocities at a set of sites located on a rigid block (the inverse problem). PEM2 can also calculate the expected velocity at any point on Earth given an Euler pole (the direct problem). PEM2 is the updated version of a tool initially developed for easy file exchange with the GAMIT/GLOBK software package. The tool is developed in the Matlab® framework and, like the previous version, includes a set of MATLAB functions (m-files), GUIs (fig-files), map data files (mat-files) and a user's manual, as well as some example input files. Changes in PEM2 include (1) bug fixes, (2) improvements to the code, (3) improvements in the statistical analysis, and (4) new input/output file formats. In addition, PEM2 can now be run under most operating systems. The tool is open source and freely available to the scientific community.
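
    The direct problem follows from rigid-plate kinematics: the predicted velocity of a site with geocentric position vector r is v = omega x r, where omega is the Euler rotation vector. A sketch on a spherical Earth, with an invented pole for illustration:

        import numpy as np

        EARTH_RADIUS_M = 6.371e6

        def unit_vector(lat_deg, lon_deg):
            lat, lon = np.radians(lat_deg), np.radians(lon_deg)
            return np.array([np.cos(lat) * np.cos(lon),
                             np.cos(lat) * np.sin(lon),
                             np.sin(lat)])

        def plate_velocity(site_lat, site_lon, pole_lat, pole_lon, rate_deg_per_myr):
            # v = omega x r, in Earth-centered Cartesian coordinates (m/yr).
            r = EARTH_RADIUS_M * unit_vector(site_lat, site_lon)
            omega = np.radians(rate_deg_per_myr) / 1e6 * unit_vector(pole_lat, pole_lon)
            return np.cross(omega, r)

        # Site in Sicily, rotated about a hypothetical pole at 0.25 deg/Myr.
        v = plate_velocity(37.5, 15.0, 50.0, -95.0, 0.25)
        print(np.linalg.norm(v) * 1000, "mm/yr")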

  12. Geospatial approach towards enumerative analysis of suspended sediment concentration for Ganges-Brahmaputra Bay

    NASA Astrophysics Data System (ADS)

    Pandey, Palak; Kunte, Pravin D.

    2016-10-01

    This study presents an easy, modular, user-friendly, and flexible software package for processing Landsat 7 ETM+ and Landsat 8 OLI-TIRS data to estimate suspended particulate matter concentrations in coastal waters. The package includes (1) an algorithm developed using the freely downloadable SCILAB package, (2) ERDAS models for iterative processing of Landsat images, and (3) an ArcMap tool for plotting and map making. Using the SCILAB package, a module is written for geometric corrections, radiometric corrections and obtaining normalized water-leaving reflectance from Landsat 8 OLI-TIRS and Landsat 7 ETM+ data. Using ERDAS models, a sequence of modules is developed for iterative processing of Landsat images and estimating suspended particulate matter concentrations. The processed images are used to prepare suspended sediment concentration maps. The applicability of this software package is demonstrated by estimating and plotting seasonal suspended sediment concentration maps off the Bengal delta. The software is flexible enough to accommodate other remotely sensed data, such as Ocean Color Monitor (OCM), Indian Remote Sensing (IRS) and MODIS data, by replacing a few parameters in the algorithm for estimating suspended sediment concentration in coastal waters.
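
    The radiometric step for Landsat 8 OLI is a standard linear rescaling of digital numbers to top-of-atmosphere reflectance with a solar-elevation correction; the per-band coefficients come from the scene's MTL metadata file. A sketch with placeholder coefficients:

        import numpy as np

        def toa_reflectance(dn, mult, add, sun_elev_deg):
            # rho = (M * DN + A) / sin(sun elevation), with M and A the
            # REFLECTANCE_MULT/ADD rescaling factors from the MTL file.
            rho = mult * np.asarray(dn, dtype=float) + add
            return rho / np.sin(np.radians(sun_elev_deg))

        # Placeholder values; real ones are read from the scene metadata.
        band4_dn = np.array([[8342, 9120], [7801, 8655]])
        print(toa_reflectance(band4_dn, mult=2.0e-5, add=-0.1, sun_elev_deg=55.2))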

  13. Detection And Mapping (DAM) package. Volume 4A: Software System Manual, part 1

    NASA Technical Reports Server (NTRS)

    Schlosser, E. H.

    1980-01-01

    The package is an integrated set of manual procedures, computer programs, and graphic devices designed for efficient production of precisely registered and formatted maps from digital LANDSAT multispectral scanner (MSS) data. The software can be readily implemented on any Univac 1100 series computer with standard peripheral equipment. This version of the software includes predefined spectral limits for use in classifying and mapping surface water for LANDSAT-1, LANDSAT-2, and LANDSAT-3. Tape formats supported include X, AM, and PM.

  14. The leaf angle distribution of natural plant populations: assessing the canopy with a novel software tool.

    PubMed

    Müller-Linow, Mark; Pinto-Espinosa, Francisco; Scharr, Hanno; Rascher, Uwe

    2015-01-01

    Three-dimensional canopies form complex architectures with temporally and spatially changing leaf orientations. Variations in canopy structure are linked to canopy function, and they occur within the scope of genetic variability as well as in reaction to environmental factors like light, water and nutrient supply, and stress. An important key measure to characterize these structural properties is the leaf angle distribution, which in turn requires knowledge of the 3-dimensional single leaf surface. Despite a large number of 3-d sensors and methods, only a few systems are applicable for fast and routine measurements in plants and natural canopies. A suitable approach is stereo imaging, which combines depth and color information and allows for easy segmentation of green leaf material and the extraction of plant traits, such as the leaf angle distribution. We developed a software package which provides tools for the quantification of leaf surface properties within natural canopies via 3-d reconstruction from stereo images. Our approach includes a semi-automatic selection process for single leaves and different modes of surface characterization via polygon smoothing or surface model fitting. Based on the resulting surface meshes, leaf angle statistics are computed on the whole-leaf level or from local derivations. We include a case study to demonstrate the functionality of our software: 48 images of small sugar beet populations (4 varieties) were analyzed on the basis of their leaf angle distribution in order to investigate seasonal, genotypic and fertilization effects on leaf angle distributions. We could show that leaf angle distributions change during the course of the season, with all varieties having a comparable development. Additionally, different varieties had different leaf angle orientations that could be separated by principal component analysis. In contrast, nitrogen treatment had no effect on leaf angles. We show that a stereo imaging setup together with the appropriate image processing tools is capable of retrieving the geometric leaf surface properties of plants and canopies. Our software package provides whole-leaf statistics but also a local estimation of leaf angles, which may have great potential to better understand and quantify structural canopy traits for guided breeding and optimized crop management.
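
    On a triangulated leaf surface, the local leaf angle follows from each facet's normal: it is the angle between that normal and the vertical. A sketch of the per-facet computation (illustrative; the package adds surface smoothing, model fitting and whole-leaf statistics):

        import numpy as np

        def facet_leaf_angle(p0, p1, p2):
            # Inclination (degrees) of a triangular facet: the angle between
            # its normal and the vertical axis (0 = horizontal surface).
            normal = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
            normal = normal / np.linalg.norm(normal)
            cos_tilt = abs(normal @ np.array([0.0, 0.0, 1.0]))  # orientation-free
            return float(np.degrees(np.arccos(cos_tilt)))

        # A facet tilted 45 degrees about the x-axis.
        print(facet_leaf_angle((0, 0, 0), (1, 0, 0), (0, 1, 1)))  # -> 45.0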

  15. A Microcomputer-Based Software Package for Eye-Monitoring Research. Technical Report No. 434.

    ERIC Educational Resources Information Center

    McConkie, George W.; And Others

    A software package is described that collects and reduces eye behavior data (eye position and pupil size) using an IBM-PC compatible computer. Written in C language for speed and portability, it includes several features: (1) data can be simultaneously collected from other sources (such as electroencephalography and electromyography); (2)…

  16. Overview of Current Activities in Combustion Instability

    DTIC Science & Technology

    2015-10-02

    Objective: predict and avoid liquid rocket engine combustion stability problems. Approach: (1) develop a state-of-the-art (SOA) combustion stability software package called Stable... Phase II will invest in multifidelity tools and methodologies; CSTD will develop a SOA combustion stability software package called Stable Combustion...

  17. Sigma 2 Graphic Display Software Program Description

    NASA Technical Reports Server (NTRS)

    Johnson, B. T.

    1973-01-01

    A general-purpose, user-oriented graphic support package was implemented. A comprehensive description of the two software components comprising this package is given: the Display Librarian and the Display Controller. These programs have been implemented in FORTRAN on the XDS Sigma 2 Computer Facility. This facility consists of an XDS Sigma 2 general-purpose computer coupled to a Computek Display Terminal.

  18. Textbook Software versus Professional Software: Which Is Better for Instructional Purposes?

    ERIC Educational Resources Information Center

    Snell, Meggan; Yatsenko, Olga

    2002-01-01

    Compares textbook software with professional packages such as Peachtree for teaching accounting, in terms of cost, availability, ease of teaching and learning, and applicability. Makes suggestions for choosing accounting software. (SK)

  19. Object-oriented design of medical imaging software.

    PubMed

    Ligier, Y; Ratib, O; Logean, M; Girard, C; Perrier, R; Scherrer, J R

    1994-01-01

    A special software package for interactive display and manipulation of medical images was developed at the University Hospital of Geneva as part of a hospital-wide Picture Archiving and Communication System (PACS). This software package, called Osiris, was especially designed to be easily usable and adaptable to the needs of non-computer-oriented physicians. The Osiris software has been developed to allow the visualization of medical images obtained from any imaging modality. It provides generic manipulation tools, processing tools, and analysis tools more specific to clinical applications. The software, based on an object-oriented paradigm, is portable and extensible. Osiris is available on two different operating systems: Unix X11/OSF-Motif based workstations, and the Macintosh family.

  20. Feature tracking CMR reveals abnormal strain in preclinical arrhythmogenic right ventricular dysplasia/ cardiomyopathy: a multisoftware feasibility and clinical implementation study.

    PubMed

    Bourfiss, Mimount; Vigneault, Davis M; Aliyari Ghasebeh, Mounes; Murray, Brittney; James, Cynthia A; Tichnell, Crystal; Mohamed Hoesein, Firdaus A; Zimmerman, Stefan L; Kamel, Ihab R; Calkins, Hugh; Tandri, Harikrishna; Velthuis, Birgitta K; Bluemke, David A; Te Riele, Anneline S J M

    2017-09-01

    Regional right ventricular (RV) dysfunction is the hallmark of Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy (ARVD/C), but is currently only qualitatively evaluated in the clinical setting. Feature Tracking Cardiovascular Magnetic Resonance (FT-CMR) is a novel quantitative method that uses cine CMR to calculate strain values. However, most prior FT-CMR studies in ARVD/C have focused on global RV strain using different software methods, complicating implementation of FT-CMR in clinical practice. We aimed to assess the clinical value of global and regional strain using FT-CMR in ARVD/C and to determine differences between commercially available FT-CMR software packages. We analyzed cine CMR images of 110 subjects (39 overt ARVD/C [mutation+/phenotype+], 40 preclinical ARVD/C [mutation+/phenotype-] and 31 control) for global and regional (subtricuspid, anterior, apical) RV strain in the horizontal longitudinal axis using four FT-CMR software methods (Multimodality Tissue Tracking, TomTec, Medis and Circle Cardiovascular Imaging). Intersoftware agreement was assessed using Bland-Altman plots. For global strain, all methods showed reduced strain in overt ARVD/C patients compared to control subjects (p < 0.041), whereas none distinguished preclinical from control subjects (p > 0.275). For regional strain, overt ARVD/C patients showed reduced strain compared to control subjects in all segments, which reached statistical significance in the subtricuspid region for all software methods (p < 0.037), in the anterior wall for two methods (p < 0.005) and in the apex for one method (p = 0.012). Preclinical subjects showed abnormal subtricuspid strain compared to control subjects using one of the software methods (p = 0.009). Agreement between software methods for absolute strain values was low (Intraclass Correlation Coefficient = 0.373). Despite large intersoftware variability of FT-CMR derived strain values, all four software methods distinguished overt ARVD/C patients from control subjects by both global and subtricuspid strain values. In the subtricuspid region, one software package distinguished preclinical from control subjects, suggesting the potential to identify early ARVD/C prior to overt disease expression.

  1. Software Application Profile: Opal and Mica: open-source software solutions for epidemiological data management, harmonization and dissemination.

    PubMed

    Doiron, Dany; Marcon, Yannick; Fortier, Isabel; Burton, Paul; Ferretti, Vincent

    2017-10-01

    Improving the dissemination of information on existing epidemiological studies and facilitating the interoperability of study databases are essential to maximizing the use of resources and accelerating improvements in health. To address this, Maelstrom Research proposes Opal and Mica, two inter-operable open-source software packages providing out-of-the-box solutions for epidemiological data management, harmonization and dissemination. Opal and Mica are standalone but inter-operable web applications written in Java, JavaScript and PHP. They provide web services and modern user interfaces to access them. Opal allows users to import, manage, annotate and harmonize study data. Mica is used to build searchable web portals disseminating study and variable metadata. When the two are used conjointly, Mica users can securely query and retrieve summary statistics on geographically dispersed Opal servers in real time. Integration with the DataSHIELD approach allows conducting more complex federated analyses involving statistical models. Opal and Mica are open-source and freely available at [www.obiba.org] under a General Public License (GPL) version 3, and the metadata models and taxonomies that accompany them are available under a Creative Commons licence. © The Author 2017; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association

  2. BaCoCa--a heuristic software tool for the parallel assessment of sequence biases in hundreds of gene and taxon partitions.

    PubMed

    Kück, Patrick; Struck, Torsten H

    2014-01-01

    BaCoCa (BAse COmposition CAlculator) is a user-friendly software tool that combines multiple statistical approaches (such as RCFV and C-value calculations) to identify biases in aligned sequence data which potentially mislead phylogenetic reconstructions. As a result of its speed and flexibility, the program makes it possible to analyze hundreds of pre-defined gene partitions and taxon subsets in one single process run. BaCoCa is command-line driven and can be easily integrated into automated process pipelines of phylogenomic studies. Moreover, given its tab-delimited output style, the results can easily be used for further analyses in programs like Excel or statistical packages like R. A built-in option of BaCoCa is the generation of heat maps with hierarchical clustering of certain results using R. As input files BaCoCa can handle FASTA and relaxed PHYLIP, which are commonly used in phylogenomic pipelines. BaCoCa is implemented in Perl and works on Windows PCs, Macs and Linux operating systems. The executable source code as well as example test files and detailed documentation of BaCoCa are freely available at http://software.zfmk.de. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. StreamThermal: A software package for calculating thermal metrics from stream temperature data

    USGS Publications Warehouse

    Tsang, Yin-Phan; Infante, Dana M.; Stewart, Jana S.; Wang, Lizhu; Tingly, Ralph; Thornbrugh, Darren; Cooper, Arthur; Wesley, Daniel

    2016-01-01

    The improving quality and availability of continuous stream temperature data allow natural resource managers, particularly in fisheries, to understand associations between different characteristics of stream thermal regimes and stream fishes. However, there has been no convenient tool to efficiently characterize multiple metrics reflecting stream thermal regimes from the increasing amount of data. This article describes a software program packaged as a library in R to facilitate this process. With this freely available package, users can quickly summarize metrics that describe five categories of stream thermal regimes: magnitude, variability, frequency, timing, and rate of change. The installation and usage instructions for the package, the definitions of the calculated thermal metrics, and the output format are described, along with an application demonstrating its utility for multiple metrics. We believe this package can be widely utilized by interested stakeholders and will greatly assist further studies in fisheries.
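
    The five metric categories map naturally onto simple summaries of a daily temperature series. The Python sketch below computes one invented example metric per category; StreamThermal itself is an R library with its own metric definitions:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(5)
        days = pd.date_range("2015-01-01", "2015-12-31", freq="D")
        # Synthetic daily mean stream temperature with a seasonal cycle.
        temp = (12 + 8 * np.sin(2 * np.pi * (days.dayofyear - 100) / 365)
                + rng.normal(0, 1, len(days)))
        series = pd.Series(temp, index=days)

        metrics = {
            "magnitude_july_mean": series[series.index.month == 7].mean(),
            "variability_annual_range": series.max() - series.min(),
            "frequency_days_above_20": int((series > 20).sum()),
            "timing_day_of_max": int(series.idxmax().dayofyear),
            "rate_max_daily_change": series.diff().abs().max(),
        }
        print(metrics)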

  4. skelesim: an extensible, general framework for population genetic simulation in R.

    PubMed

    Parobek, Christian M; Archer, Frederick I; DePrenger-Levin, Michelle E; Hoban, Sean M; Liggins, Libby; Strand, Allan E

    2017-01-01

    Simulations are a key tool in molecular ecology for inference and forecasting, as well as for evaluating new methods. Due to growing computational power and a diversity of software with different capabilities, simulations are becoming increasingly powerful and useful. However, the widespread use of simulations by geneticists and ecologists is hindered by difficulties in understanding the complex capabilities of these programs, composing code and input files, a daunting bioinformatics barrier, and a steep conceptual learning curve. skelesim (an R package) guides users in choosing appropriate simulations, setting parameters, calculating genetic summary statistics and organizing data output, in a reproducible pipeline within the R environment. skelesim is designed to be an extensible framework that can 'wrap' around any simulation software (inside or outside the R environment) and be extended to calculate and graph any genetic summary statistics. Currently, skelesim implements coalescent and forward-time models available in the fastsimcoal2 and rmetasim simulation engines to produce null distributions for multiple population genetic statistics and marker types, under a variety of demographic conditions. skelesim is intended to make simulations easier while still allowing full model complexity, to ensure that simulations play a fundamental role in molecular ecology investigations. skelesim can also serve as a teaching tool: demonstrating the outcomes of stochastic population genetic processes; teaching general concepts of simulations; and providing an introduction to the R environment with a user-friendly graphical user interface (using shiny). © 2016 John Wiley & Sons Ltd.

  5. "Do-it-yourself" software program calculates boiler efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-03-01

    An easy-to-use software package is described which runs on the IBM Personal Computer. The package calculates boiler efficiency, an important parameter of operating costs and equipment wellbeing. The program stores inputs and calculated results for 20 sets of boiler operating data, called cases. Cases can be displayed and modified on the CRT screen through multiple display pages or copied to a printer. All intermediate calculations are performed by this package. They include: steam enthalpy; water enthalpy; air humidity; gas, oil, coal, and wood heat capacity; and radiation losses.

  6. Error correction and diversity analysis of population mixtures determined by NGS

    PubMed Central

    Burroughs, Nigel J.; Evans, David J.; Ryabov, Eugene V.

    2014-01-01

    The impetus for this work was the need to analyse nucleotide diversity in a viral mix taken from honeybees. The paper has two findings. First, a method for correction of next generation sequencing error in the distribution of nucleotides at a site is developed. Second, a package of methods for assessment of nucleotide diversity is assembled. The error correction method is statistically based and works at the level of the nucleotide distribution rather than the level of individual nucleotides. The method relies on an error model and a sample of known viral genotypes that is used for model calibration. A compendium of existing and new diversity analysis tools is also presented, allowing hypotheses about diversity and mean diversity to be tested and associated confidence intervals to be calculated. The methods are illustrated using honeybee viral samples. Software in both Excel and Matlab and a guide are available at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/, the Warwick University Systems Biology Centre software download site. PMID:25405074

  7. Extending R packages to support 64-bit compiled code: An illustration with spam64 and GIMMS NDVI3g data

    NASA Astrophysics Data System (ADS)

    Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard

    2017-07-01

    Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. On the R side, users do not need to change existing code and may not even notice the extension; interfacing 64-bit compiled code efficiently, on the other hand, is challenging. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.

  8. Statistical tools for analysis and modeling of cosmic populations and astronomical time series: CUDAHM and TSE

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas; Budavari, Tamas; Scargle, Jeffrey D.

    2018-01-01

    This presentation provides an overview of open-source software packages addressing two challenging classes of astrostatistics problems. (1) CUDAHM is a C++ framework for hierarchical Bayesian modeling of cosmic populations, leveraging graphics processing units (GPUs) to enable applying this computationally challenging paradigm to large datasets. CUDAHM is motivated by measurement error problems in astronomy, where density estimation and linear and nonlinear regression must be addressed for populations of thousands to millions of objects whose features are measured with possibly complex uncertainties, potentially including selection effects. An example calculation demonstrates accurate GPU-accelerated luminosity function estimation for simulated populations of 10^6 objects in about two hours using a single NVIDIA Tesla K40c GPU. (2) Time Series Explorer (TSE) is a collection of software in Python and MATLAB for exploratory analysis and statistical modeling of astronomical time series. It comprises a library of stand-alone functions and classes, as well as an application environment for interactive exploration of time series data. The presentation will summarize key capabilities of this emerging project, including new algorithms for analysis of irregularly sampled time series.

  9. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    NASA Astrophysics Data System (ADS)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.

  10. A survey of tools for the analysis of quantitative PCR (qPCR) data.

    PubMed

    Pabinger, Stephan; Rödiger, Stefan; Kriegner, Albert; Vierlinger, Klemens; Weinhäusel, Andreas

    2014-09-01

    Real-time quantitative polymerase chain reaction (qPCR) is a standard technique in most laboratories, used for various applications in basic research. Analysis of qPCR data is a crucial part of the entire experiment, which has led to the development of a plethora of methods. The released tools either cover specific parts of the workflow or provide complete analysis solutions. Here, we surveyed 27 open-access software packages and tools for the analysis of qPCR data. The survey includes 8 Microsoft Windows, 5 web-based, 9 R-based and 5 tools from other platforms. Reviewed packages and tools support the analysis of different qPCR applications, such as RNA quantification, DNA methylation, genotyping, identification of copy number variations, and digital PCR. We report an overview of the functionality, features and specific requirements of the individual software tools, such as data exchange formats, availability of a graphical user interface, included procedures for graphical data presentation, and offered statistical methods. In addition, we provide an overview of quantification strategies and report various applications of qPCR. Our comprehensive survey showed that most tools use their own file format and only a fraction of the currently existing tools support the standardized data exchange format RDML. To allow a more streamlined and comparable analysis of qPCR data, more vendors and tools need to adopt the standardized format to encourage the exchange of data between instrument software, analysis tools, and researchers.

  11. The NOD3 software package: A graphical user interface-supported reduction package for single-dish radio continuum and polarisation observations

    NASA Astrophysics Data System (ADS)

    Müller, Peter; Krause, Marita; Beck, Rainer; Schmidt, Philip

    2017-10-01

    Context. The venerable NOD2 data reduction software package for single-dish radio continuum observations, developed for use at the 100-m Effelsberg radio telescope, has been applied successfully over many decades. Modern computing facilities, however, call for a new design. Aims: We aim to develop an interactive software tool with a graphical user interface for the reduction of single-dish radio continuum maps. We make a special effort to reduce the distortions along the scanning direction (scanning effects) by combining maps scanned in orthogonal directions, or dual- or multiple-horn observations, which need to be processed in a restoration procedure. The package should also process polarisation data and offer the possibility to include special tasks written by the individual user. Methods: Based on the ideas of the NOD2 package we developed NOD3, which includes all necessary tasks from the raw maps to the final maps in total intensity and linear polarisation. Furthermore, plot routines and several methods for map analysis are available. The NOD3 package is written in Python, which allows the package to be extended via additional tasks. The required data format for the input maps is FITS. Results: The NOD3 package is a sophisticated tool to process and analyse maps from single-dish observations that are affected by scanning effects from clouds, receiver instabilities, or radio-frequency interference. The "basket-weaving" tool combines orthogonally scanned maps into a final map that is almost free of scanning effects. The new restoration tool for dual-beam observations reduces the noise by a factor of about two compared to the NOD2 version. Combining single-dish with interferometer data in the map plane ensures full recovery of the total flux density. Conclusions: This software package is available under the open-source GPL license for free use at other single-dish radio telescopes of the astronomical community. The NOD3 package is designed to be extendable to multi-channel data represented by data cubes in Stokes I, Q, and U.

  12. Sampling and counting genome rearrangement scenarios

    PubMed Central

    2015-01-01

    Background Even for moderate-size inputs there are a tremendous number of optimal rearrangement scenarios, regardless of what the model is and which specific question is to be answered. Therefore, giving one optimal solution might be misleading and cannot be used for statistical inference. Statistically well-founded methods are necessary to sample uniformly from the solution space; a small number of samples is then sufficient for statistical inference. Contribution In this paper, we give a mini-review of the state of the art in sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. In addition, we give a Gibbs sampler for sampling most parsimonious labelings of evolutionary trees under the SCJ model. The method has been implemented and tested on real-life data. The software package, together with example data, can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124

  13. A complete solution classification and unified algorithmic treatment for the one- and two-step asymmetric S-transverse mass event scale statistic

    NASA Astrophysics Data System (ADS)

    Walker, Joel W.

    2014-08-01

    The MT2, or "s-transverse mass", statistic was developed to associate a parent mass scale to a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, a complete solution classification, a taxonomy of critical points, and a technical algorithmic prescription for treatment of the event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and a complete pseudocode.
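
    In the symmetric massless-child case the statistic reads MT2 = min over q1 + q2 = pTmiss of max(MT(p1, q1), MT(p2, q2)). A brute-force grid scan makes that definition concrete (illustrative only; the algorithm described above treats the general asymmetric case analytically):

        import numpy as np

        def mt_sq(m_vis, pt_vis, q):
            # Transverse mass squared for a visible particle paired with a
            # massless invisible particle of transverse momentum q.
            et_vis = np.sqrt(m_vis**2 + pt_vis @ pt_vis)
            return m_vis**2 + 2.0 * (et_vis * np.sqrt(q @ q) - pt_vis @ q)

        def mt2_bruteforce(m1, p1, m2, p2, pt_miss, n=200, scan=600.0):
            # Scan splittings q1 + q2 = pt_miss on a grid; keep the smallest
            # max(MT1, MT2). Coarse and slow, but it mirrors the definition.
            best = np.inf
            grid = np.linspace(-scan, scan, n)
            for qx in grid:
                for qy in grid:
                    q1 = np.array([qx, qy])
                    q2 = pt_miss - q1
                    best = min(best, max(mt_sq(m1, p1, q1), mt_sq(m2, p2, q2)))
            return np.sqrt(best)

        p1 = np.array([120.0, 30.0])   # visible transverse momenta (GeV)
        p2 = np.array([-80.0, -40.0])
        pt_miss = -(p1 + p2)           # toy event with no upstream momentum
        print(mt2_bruteforce(0.0, p1, 0.0, p2, pt_miss))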

  16. Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package

    NASA Astrophysics Data System (ADS)

    Cheng, L.; AghaKouchak, A.; Gilleland, E.

    2013-12-01

    Numerous studies show that climatic extremes have increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimation of return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for analysis of climate extremes under both stationary and nonstationary assumptions. The Nonstationary Extreme Value Analysis (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC) approach, which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with Markov Chain Monte Carlo (MCMC) sampling, and which has advantages in simplicity, speed of calculation, and convergence over conventional MCMC. NEVA also provides confidence intervals and uncertainty bounds for estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization, and visualization, and is explicitly designed to facilitate the analysis of extremes in the geosciences. The generalized input and output files of this software package make it attractive for users from across different fields. Both stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
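
    The nonstationary ingredient can be made concrete by letting the GEV location parameter drift linearly in time. The sketch below fits such a model by plain maximum likelihood instead of NEVA's Bayesian DE-MC sampler; the linear drift and the synthetic data are illustrative assumptions:

        import numpy as np
        from scipy import optimize, stats

        def fit_nonstationary_gev(x, t):
            # GEV with time-varying location mu(t) = mu0 + mu1 * t and constant
            # scale/shape; scipy's genextreme uses c = -xi for the GEV shape xi.
            t = (t - t.mean()) / t.std()   # standardise time for a stable fit

            def nll(theta):
                mu0, mu1, log_sigma, xi = theta
                return -np.sum(stats.genextreme.logpdf(
                    x, c=-xi, loc=mu0 + mu1 * t, scale=np.exp(log_sigma)))

            start = np.array([x.mean(), 0.0, np.log(x.std()), 0.1])
            return optimize.minimize(nll, start, method="Nelder-Mead").x

        # Synthetic annual maxima with an upward trend:
        rng = np.random.default_rng(1)
        years = np.arange(1950, 2010, dtype=float)
        maxima = stats.genextreme.rvs(c=-0.1, loc=20 + 0.05 * (years - 1950),
                                      scale=3.0, size=years.size, random_state=rng)
        print(fit_nonstationary_gev(maxima, years))   # mu0, mu1, log sigma, xi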

  17. Find Pairs: The Module for Protein Quantification of the PeakQuant Software Suite

    PubMed Central

    Eisenacher, Martin; Kohl, Michael; Wiese, Sebastian; Hebeler, Romano; Meyer, Helmut E.

    2012-01-01

    Abstract Accurate quantification of proteins is one of the major tasks in current proteomics research. To address this issue, a wide range of stable isotope labeling techniques have been developed, allowing one to quantitatively study thousands of proteins by means of mass spectrometry. In this article, the FindPairs module of the PeakQuant software suite is detailed. It facilitates the automatic determination of protein abundance ratios based on the automated analysis of stable isotope-coded mass spectrometric data. Furthermore, it implements statistical methods to determine outliers due to biological as well as technical variance of proteome data obtained in replicate experiments. This provides an important means to evaluate the significance of obtained protein expression data. For demonstrating the high applicability of FindPairs, we focused on the quantitative analysis of proteome data acquired in 14N/15N labeling experiments. We further provide a comprehensive overview of the features of the FindPairs software, and compare these with existing quantification packages. The software presented here supports a wide range of proteomics applications, allowing one to quantitatively assess data derived from different stable isotope labeling approaches, such as 14N/15N labeling, SILAC, and iTRAQ. The software is publicly available at http://www.medizinisches-proteom-center.de/software and free for academic use. PMID:22909347
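
    The core quantification step, turning peptide-level light/heavy intensity pairs into a per-protein ratio with an outlier screen, can be sketched as follows. The data layout and the MAD-based outlier rule are assumptions for illustration, not FindPairs' actual statistics:

        import numpy as np

        def protein_ratios(peptides):
            # peptides: protein -> list of (light_intensity, heavy_intensity)
            # pairs, e.g. 14N/15N peak intensities for its peptides.
            out = {}
            for protein, pairs in peptides.items():
                r = np.log2([light / heavy for light, heavy in pairs])
                med = np.median(r)
                mad = max(np.median(np.abs(r - med)), 1e-12)
                outliers = np.abs(r - med) > 3.0 * 1.4826 * mad   # ~3 sigma rule
                out[protein] = (med, outliers)
            return out

        print(protein_ratios({"P1": [(1200.0, 600.0), (900.0, 450.0), (80.0, 400.0)]}))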

  18. Automated data collection in single particle electron microscopy

    PubMed Central

    Tan, Yong Zi; Cheng, Anchi; Potter, Clinton S.; Carragher, Bridget

    2016-01-01

    Automated data collection is an integral part of modern workflows in single particle electron microscopy (EM) research. This review surveys the software packages available for automated single particle EM data collection. The degree of automation at each stage of data collection is evaluated, and the capabilities of the software packages are described. Finally, future trends in automation are discussed. PMID:26671944

  19. Defense AT and L. Volume 42, Number 1

    DTIC Science & Technology

    2013-02-01

    Agnish The U.S. Army late last year began equipping brigade combat teams with its first package of radios, satellite systems, software applications... Army's first package of radios, satellite systems, software applications, smartphone-like devices, and other network components that provide integrated... satellite communications, intelligence, mission command applications, and the integration of C4ISR equipment onto various vehicle platforms. This

  20. Increasing the Number of Replications in Item Response Theory Simulations: Automation through SAS and Disk Operating System

    ERIC Educational Resources Information Center

    Gagne, Phill; Furlow, Carolyn; Ross, Terris

    2009-01-01

    In item response theory (IRT) simulation research, it is often necessary to use one software package for data generation and a second software package to conduct the IRT analysis. Because this can substantially slow down the simulation process, it is sometimes offered as a justification for using very few replications. This article provides…

  1. Software for Teaching about AIDS & Sex: A Critical Review of Products. A MicroSIFT Report.

    ERIC Educational Resources Information Center

    Weaver, Dave

    This document contains critical reviews of 10 microcomputer software packages and two interactive videodisc products designed for use in teaching about Acquired Immune Deficiency Syndrome (AIDS) and sex at the secondary school level and above. Each package was reviewed by one or two secondary school health teachers and by a staff member from the…

  2. User's manual for the VAX-Gerber link software package. Revision 1. 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isobe, G.W.

    1985-10-01

    This manual provides a user the information necessary to run the VAX-Gerber link software package. It is expected that the user already knows how to log in to the VAX and is familiar with the Gerber Photo Plotter. It is also highly desirable that the user be familiar with the full-screen editor on the VAX, EDT.

  3. Intraobserver and Interobserver Agreement of Structural and Functional Software Programs for Measuring Glaucoma Progression.

    PubMed

    Moreno-Montañés, Javier; Antón, Vanesa; Antón, Alfonso; Larrosa, José M; Martinez-de-la-Casa, José María; Rebolleda, Gema; Ussa, Fernando; García-Granero, Marta

    2017-04-01

    It is important to evaluate intraobserver and interobserver agreement using visual field (VF) testing and optical coherence tomography (OCT) software in order to understand whether the use of this software is sufficient to detect glaucoma progression and to make decisions regarding its treatment. To evaluate agreement in VF and OCT software among 5 glaucoma specialists. The printout pages from VF progression software and OCT progression software from 100 patients were randomized, and the 5 glaucoma specialists subjectively and independently evaluated them for glaucoma. Each image was classified as having no progression, questionable progression, or progression. The principal investigator classified the patients previously as without variability (normal) or with high variability among tests (difficult). Using both software packages, the specialists also evaluated whether the glaucoma damage had progressed and whether a treatment change was needed. One month later, the same observers reevaluated the patients in a different order to determine intraobserver reproducibility. Intraobserver and interobserver agreement was estimated using κ statistics and Gwet's second-order agreement coefficient, and agreement was examined in relation to other factors. Of the 100 observed patients, half were male and all were white; the mean (SD) age was 69.7 (14.1) years. Intraobserver agreement was substantial to almost perfect for the VF software (overall κ [95% CI], 0.59 [0.46-0.72] to 0.87 [0.79-0.96]) and similar for the OCT software (overall κ [95% CI], 0.59 [0.46-0.71] to 0.85 [0.76-0.94]). Interobserver agreement among the 5 glaucoma specialists with the VF progression software was moderate (κ, 0.48; 95% CI, 0.41-0.55) and similar to that with the OCT progression software (κ, 0.52; 95% CI, 0.44-0.59). Interobserver agreement was substantial in images classified as having no progression but only fair in those classified as having questionable glaucoma progression or glaucoma progression. Interobserver agreement was fair regarding questions about glaucoma progression (κ, 0.39; 95% CI, 0.32-0.48) and consideration of treatment changes (κ, 0.39; 95% CI, 0.32-0.48). The factors associated with agreement were the glaucoma stage and case difficulty. There was substantial intraobserver agreement but moderate interobserver agreement among glaucoma specialists using 2 glaucoma progression software packages. These data suggest that these glaucoma progression software packages are insufficient to obtain high interobserver agreement with either device except in patients with no progression. The low agreement regarding progression or treatment changes suggests that both software programs used in isolation are insufficient for decision making.

  4. Software engineering and data management for automated payload experiment tool

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Provancha, Anna; Chattam, David

    1994-01-01

    The Microgravity Projects Office identified a need to develop a software package that will lead experiment developers through the development planning process, obtain necessary information, establish an electronic data exchange avenue, and allow easier manipulation/reformatting of the collected information. An MS-DOS compatible software package called the Automated Payload Experiment Tool (APET) has been developed and delivered. The objective of this task is to expand on the results of the APET work previously performed by the University of Alabama in Huntsville (UAH) and provide versions of the software in Macintosh- and Windows-compatible formats. Appendix 1, the Science Requirements Document (SRD) User's Manual, is attached.

  5. Modular modeling system for building distributed hydrologic models with a user-friendly software package

    NASA Astrophysics Data System (ADS)

    Wi, S.; Ray, P. A.; Brown, C.

    2015-12-01

    A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of the hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.

  6. Comparison of Methods for Characterizing Nonideal Solute Self-Association by Sedimentation Equilibrium

    PubMed Central

    Scott, David J.; Winzor, Donald J.

    2009-01-01

    Abstract We have examined in detail analytical solutions of expressions for sedimentation equilibrium in the analytical ultracentrifuge to describe self-association under nonideal conditions. We find that those containing the radial dependence of total solute concentration that incorporate the Adams-Fujita assumption for composition-dependence of activity coefficients reveal potential shortcomings for characterizing such systems. Similar deficiencies are shown in the use of the NONLIN software incorporating the same assumption about the interrelationship between activity coefficients for monomer and polymer species. These difficulties can be overcome by iterative analyses incorporating expressions for the composition-dependence of activity coefficients predicted by excluded volume considerations. A recommendation is therefore made for the replacement of current software packages by programs that incorporate rigorous statistical-mechanical allowance for thermodynamic nonideality in sedimentation equilibrium distributions reflecting solute self-association. PMID:19651047

  7. Software package for modeling spin–orbit motion in storage rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zyuzin, D. V., E-mail: d.zyuzin@fz-juelich.de

    2015-12-15

    A software package providing a graphical user interface for computer experiments on the motion of charged particle beams in accelerators, as well as analysis of obtained data, is presented. The software package was tested in the framework of the international project on electric dipole moment measurement JEDI (Jülich Electric Dipole moment Investigations). The specific features of particle spin motion imply the requirement to use a cyclic accelerator (storage ring) consisting of electrostatic elements, which makes it possible to preserve horizontal polarization for a long time. Computer experiments study the dynamics of 10^6–10^9 particles in a beam during 10^9 turns in an accelerator (about 10^12–10^15 integration steps for the equations of motion). For designing an optimal accelerator structure, a large number of computer experiments on polarized beam dynamics are required. The numerical core of the package is COSY Infinity, a program for modeling spin–orbit dynamics.

  8. SEMModComp: An R Package for Calculating Likelihood Ratio Tests for Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Levy, Roy

    2010-01-01

    SEMModComp, a software package for conducting likelihood ratio tests for mean and covariance structure modeling, is described. The package is written in R and is freely available for download or on request.
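
    The underlying test is the ordinary chi-square difference test between nested models. A minimal sketch (in Python rather than R, with hypothetical fitted log-likelihoods):

        from scipy.stats import chi2

        def lr_test(llf_restricted, llf_full, df_diff):
            # Twice the log-likelihood gain is asymptotically chi-square
            # distributed, with df equal to the difference in free parameters.
            stat = 2.0 * (llf_full - llf_restricted)
            return stat, chi2.sf(stat, df_diff)

        stat, p = lr_test(-1052.3, -1041.7, 4)   # hypothetical values
        print(stat, p)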

  9. The design, deployment, and testing of kriging models in GEOframe with SIK-0.9.8

    NASA Astrophysics Data System (ADS)

    Bancheri, Marialaura; Serafin, Francesco; Bottazzi, Michele; Abera, Wuletawu; Formetta, Giuseppe; Rigon, Riccardo

    2018-06-01

    This work presents a software package for the interpolation of climatological variables, such as temperature and precipitation, using kriging techniques. The purposes of the paper are (1) to present a geostatistical software that is easy to use and easy to plug in to a hydrological model; (2) to provide a practical example of an accurately designed software from the perspective of reproducible research; and (3) to demonstrate the goodness of the results of the software and so have a reliable alternative to other, more traditional tools. A total of 11 types of theoretical semivariograms and four types of kriging were implemented and gathered into Object Modeling System-compliant components. The package provides real-time optimization for semivariogram and kriging parameters. The software was tested using a year's worth of hourly temperature readings and a rainstorm event (11 h) recorded in 2008 and retrieved from 97 meteorological stations in the Isarco River basin, Italy. For both variables, good interpolation results were obtained and then compared with the results from the R package gstat.
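
    The prediction step such kriging components perform reduces to a small linear system. Below is a minimal ordinary-kriging sketch with an exponential semivariogram and illustrative parameters, far simpler than SIK's eleven semivariogram models and built-in parameter optimization:

        import numpy as np

        def exp_semivariogram(h, nugget, sill, rng):
            return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

        def ordinary_kriging(xy, z, xy0, nugget=0.0, sill=1.0, rng=10.0):
            # Solve the ordinary kriging system (semivariogram form with a
            # Lagrange multiplier enforcing weights that sum to one).
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            A = np.zeros((n + 1, n + 1))
            A[:n, :n] = exp_semivariogram(d, nugget, sill, rng)
            A[n, :n] = A[:n, n] = 1.0
            b = np.ones(n + 1)
            b[:n] = exp_semivariogram(np.linalg.norm(xy - xy0, axis=1),
                                      nugget, sill, rng)
            w = np.linalg.solve(A, b)
            return w[:n] @ z   # predicted value at xy0

        stations = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 7.0]])
        temps = np.array([14.2, 15.1, 13.4])
        print(ordinary_kriging(stations, temps, np.array([3.0, 3.0])))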

  10. Advanced Software Development Workstation Project

    NASA Technical Reports Server (NTRS)

    Lee, Daniel

    1989-01-01

    The Advanced Software Development Workstation Project, funded by Johnson Space Center, is investigating knowledge-based techniques for software reuse in NASA software development projects. Two prototypes have been demonstrated and a third is now in development. The approach is to build a foundation that provides passive reuse support, add a layer that uses domain-independent programming knowledge, add a layer that supports the acquisition of domain-specific programming knowledge to provide active support, and enhance maintainability and modifiability through an object-oriented approach. The development of new application software would use specification-by-reformulation, based on a cognitive theory of retrieval from very long-term memory in humans, and using an Ada code library and an object base. Current tasks include enhancements to the knowledge representation of Ada packages and abstract data types, extensions to support Ada package instantiation knowledge acquisition, integration with Ada compilers and relational databases, enhancements to the graphical user interface, and demonstration of the system with a NASA contractor-developed trajectory simulation package. Future work will focus on investigating issues involving scale-up and integration.

  11. DR-Integrator: a new analytic tool for integrating DNA copy number and gene expression data.

    PubMed

    Salari, Keyan; Tibshirani, Robert; Pollack, Jonathan R

    2010-02-01

    DNA copy number alterations (CNA) frequently underlie gene expression changes by increasing or decreasing gene dosage. However, only a subset of genes with altered dosage exhibit concordant changes in gene expression. This subset is likely to be enriched for oncogenes and tumor suppressor genes, and can be identified by integrating these two layers of genome-scale data. We introduce DNA/RNA-Integrator (DR-Integrator), a statistical software tool to perform integrative analyses on paired DNA copy number and gene expression data. DR-Integrator identifies genes with significant correlations between DNA copy number and gene expression, and implements a supervised analysis that captures genes with significant alterations in both DNA copy number and gene expression between two sample classes. DR-Integrator is freely available for non-commercial use from the Pollack Lab at http://pollacklab.stanford.edu/ and can be downloaded as a plug-in application to Microsoft Excel and as a package for the R statistical computing environment. The R package is available under the name 'DRI' at http://cran.r-project.org/. An example analysis using DR-Integrator is included as supplemental material. Supplementary data are available at Bioinformatics online.
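
    The first of those analyses, per-gene correlation between matched copy-number and expression profiles with permutation-based significance, might be sketched as follows (a genes-by-samples layout is assumed; DR-Integrator's own statistics differ in detail):

        import numpy as np

        def row_pearson(a, b):
            a = a - a.mean(axis=1, keepdims=True)
            b = b - b.mean(axis=1, keepdims=True)
            return (a * b).sum(1) / np.sqrt((a * a).sum(1) * (b * b).sum(1))

        def dna_rna_correlation(cn, expr, n_perm=1000, seed=0):
            # cn, expr: genes x samples arrays with matched columns.
            # Permutation p-values: shuffle the expression sample labels and
            # count how often |r| meets or beats the observed value.
            rng = np.random.default_rng(seed)
            r_obs = row_pearson(cn, expr)
            exceed = np.zeros(r_obs.size)
            for _ in range(n_perm):
                perm = rng.permutation(expr.shape[1])
                exceed += np.abs(row_pearson(cn, expr[:, perm])) >= np.abs(r_obs)
            return r_obs, (exceed + 1.0) / (n_perm + 1.0)

        rng = np.random.default_rng(1)
        cn = rng.normal(size=(100, 40))
        expr = 0.6 * cn + rng.normal(size=(100, 40))   # synthetic dosage effect
        r, p = dna_rna_correlation(cn, expr, n_perm=200)
        print(r[:3], p[:3])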

  12. SWATH2stats: An R/Bioconductor Package to Process and Convert Quantitative SWATH-MS Proteomics Data for Downstream Analysis Tools.

    PubMed

    Blattmann, Peter; Heusel, Moritz; Aebersold, Ruedi

    2016-01-01

    SWATH-MS is an acquisition and analysis technique of targeted proteomics that enables measuring several thousand proteins with high reproducibility and accuracy across many samples. OpenSWATH is popular open-source software for peptide identification and quantification from SWATH-MS data. For downstream statistical and quantitative analysis there exist different tools such as MSstats, mapDIA and aLFQ. However, the transfer of data from OpenSWATH to the downstream statistical tools is currently technically challenging. Here we introduce the R/Bioconductor package SWATH2stats, which allows convenient processing of the data into a format directly readable by the downstream analysis tools. In addition, SWATH2stats allows annotation, analyzing the variation and the reproducibility of the measurements, FDR estimation, and advanced filtering before submitting the processed data to downstream tools. These functionalities are important to quickly analyze the quality of the SWATH-MS data. Hence, SWATH2stats is a new open-source tool that summarizes several practical functionalities for analyzing, processing, and converting SWATH-MS data and thus facilitates the efficient analysis of large-scale SWATH/DIA datasets.

  13. The STARLINK software collection

    NASA Astrophysics Data System (ADS)

    Penny, A. J.; Wallace, P. T.; Sherman, J. C.; Terret, D. L.

    1993-12-01

    A demonstration will be given of some recent Starlink software. STARLINK is: a network of computers used by UK astronomers; a collection of programs for the calibration and analysis of astronomical data; a team of people giving hardware, software and administrative support. The Starlink Project has been in operation since 1980 to provide UK astronomers with interactive image processing and data reduction facilities. There are now Starlink computer systems at 25 UK locations, serving about 1500 registered users. The Starlink software collection now has about 25 major packages covering a wide range of astronomical data reduction and analysis techniques, as well as many smaller programs and utilities. At the core of most of the packages is a common `software environment', which provides many of the functions which applications need and offers standardized methods of structuring and accessing data. The software environment simplifies programming and support, and makes it easy to use different packages for different stages of the data reduction. Users see a consistent style, and can mix applications without hitting problems of differing data formats. The Project group coordinates the writing and distribution of this software collection, which is Unix based. Outside the UK, Starlink is used at a large number of places, which range from installations at major UK telescopes, which are Starlink-compatible and managed like Starlink sites, to individuals who run only small parts of the Starlink software collection.

  14. ELF/VLF/LF Radio Propagation and Systems Aspects (La Propagation des Ondes Radio ELF/VLF/LF et les Aspects Systemes)

    DTIC Science & Technology

    1993-05-01

    limitation of the software package would not allow the program to run over 2359 to 0001 UT. ... Capability (LWPC), a software package developed at NOSC (FERGUSON et al 1989) and adapted by us to the Macintosh personal computer. We find that this software works very well. Our investigations are to evaluate and devise geophysical models to be used with LWPC in assessing VLF communications and

  15. Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates.

    PubMed

    Xia, Li C; Steele, Joshua A; Cram, Jacob A; Cardon, Zoe G; Simmons, Sheri L; Vallino, Joseph J; Fuhrman, Jed A; Sun, Fengzhu

    2011-01-01

    The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However, LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of available information. With replicates, it is possible to understand the variability of the local similarity (LS) score and to obtain its confidence interval. We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique into an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond those of ordinary correlation analysis. These statistically significant associations can provide insights into the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa.
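
    At the heart of LSA is a simple dynamic programme over rank-normalised series: a running product sum reset at zero, scanned over a range of time delays. A sketch of that score, without the replicate handling and permutation significance that eLSA adds:

        import numpy as np
        from scipy.stats import norm, rankdata

        def normal_scores(x):
            # Rank-based normal transformation of one series.
            return norm.ppf(rankdata(x) / (len(x) + 1.0))

        def local_similarity(x, y, max_delay=3):
            # Best local association between two series, allowing shifts of
            # up to max_delay steps and either sign of association.
            X, Y = normal_scores(x), normal_scores(y)
            n, best = len(X), 0.0
            for d in range(-max_delay, max_delay + 1):
                for sign in (1.0, -1.0):
                    s = 0.0
                    for i in range(n):
                        if 0 <= i + d < n:
                            s = max(0.0, s + sign * X[i] * Y[i + d])
                            best = max(best, s)
            return best / n

        rng = np.random.default_rng(0)
        a = rng.normal(size=50)
        b = np.roll(a, 2) + 0.5 * rng.normal(size=50)   # b lags a by two steps
        print(local_similarity(a, b, max_delay=3))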

  16. Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates

    PubMed Central

    2011-01-01

    Background The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However, LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of available information. With replicates, it is possible to understand the variability of the local similarity (LS) score and to obtain its confidence interval. Results We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique into an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. Conclusions The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond those of ordinary correlation analysis. These statistically significant associations can provide insights into the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa. PMID:22784572

  17. A RESEARCH DATABASE FOR IMPROVED DATA MANAGEMENT AND ANALYSIS IN LONGITUDINAL STUDIES

    PubMed Central

    BIELEFELD, ROGER A.; YAMASHITA, TOYOKO S.; KEREKES, EDWARD F.; ERCANLI, EHAT; SINGER, LYNN T.

    2014-01-01

    We developed a research database for a five-year prospective investigation of the medical, social, and developmental correlates of chronic lung disease during the first three years of life. We used the Ingres database management system and the Statit statistical software package. The database includes records containing 1300 variables each, the results of 35 psychological tests, each repeated five times (providing longitudinal data on the child, the parents, and behavioral interactions), both raw and calculated variables, and both missing and deferred values. The four-layer menu-driven user interface incorporates automatic activation of complex functions to handle data verification, missing and deferred values, static and dynamic backup, determination of calculated values, display of database status, reports, bulk data extraction, and statistical analysis. PMID:7596250

  18. Editorial: Bayesian benefits for child psychology and psychiatry researchers.

    PubMed

    Oldehinkel, Albertine J

    2016-09-01

    For many scientists, performing statistical tests has become an almost automated routine. However, p-values are frequently used and interpreted incorrectly; and even when used appropriately, p-values tend to provide answers that do not match researchers' questions and hypotheses well. Bayesian statistics present an elegant and often more suitable alternative. The Bayesian approach has rarely been applied in child psychology and psychiatry research so far, but the development of user-friendly software packages and tutorials has placed it well within reach now. Because Bayesian analyses require a more refined definition of hypothesized probabilities of possible outcomes than the classical approach, going Bayesian may offer the additional benefit of sparking the development and refinement of theoretical models in our field. © 2016 Association for Child and Adolescent Mental Health.

  19. Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water

    DOE PAGES

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...

    2016-08-13

    In this study, methods are addressed to reduce the computational time needed to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows, and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.

  20. Integrated data management for clinical studies: automatic transformation of data models with semantic annotations for principal investigators, data managers and statisticians.

    PubMed

    Dugas, Martin; Dugas-Breit, Susanne

    2014-01-01

    Design, execution and analysis of clinical studies involves several stakeholders with different professional backgrounds. Typically, principal investigators are familiar with standard office tools, data managers apply electronic data capture (EDC) systems and statisticians work with statistics software. Case report forms (CRFs) specify the data model of study subjects, evolve over time and consist of hundreds to thousands of data items per study. To avoid erroneous manual transformation work, a conversion tool for different representations of study data models was designed. It can convert between office format, EDC and statistics format. In addition, it supports semantic annotations, which enable precise definitions for data items. A reference implementation is available as the open-source package ODMconverter at http://cran.r-project.org.

  1. Computing Interactions Of Free-Space Radiation With Matter

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.

    1995-01-01

    High Charge and Energy Transport (HZETRN) computer program computationally efficient, user-friendly package of software addressing problem of transport of, and shielding against, radiation in free space. Designed as "black box" for design engineers not concerned with physics of underlying atomic and nuclear radiation processes in free-space environment, but rather primarily interested in obtaining fast and accurate dosimetric information for design and construction of modules and devices for use in free space. Computational efficiency achieved by unique algorithm based on deterministic approach to solution of Boltzmann equation rather than computationally intensive statistical Monte Carlo method. Written in FORTRAN.

  2. Tolerancing aspheres based on manufacturing knowledge

    NASA Astrophysics Data System (ADS)

    Wickenhagen, S.; Kokot, S.; Fuchs, U.

    2017-10-01

    A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions are assumed, they all rely on statistics, which usually means several hundred or several thousand systems are needed for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The extensive database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed, aiming for a robust optical tolerancing process.

  3. Toolkit for testing scientific CCD cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, Janusz; Mankiewicz, Lech; Molak, Marcin; Wrochna, Grzegorz

    2006-03-01

    The CCD Toolkit (1) is a software tool for testing CCD cameras which allows one to measure important characteristics of a camera such as readout noise, total gain, dark current, 'hot' pixels, useful area, etc. The application performs a statistical analysis of images saved in files in the FITS format, commonly used in astronomy. A graphical interface is based on the ROOT package, which offers high functionality and flexibility. The program was developed in a way that ensures future compatibility with different operating systems: Windows and Linux. The CCD Toolkit was created for the "Pi of the Sky" project collaboration (2).
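
    Two of the characteristics listed, total gain and readout noise, are classically estimated from a pair of bias frames and a pair of flat fields (the photon-transfer pair method). A generic sketch of that calculation, independent of the CCD Toolkit's own implementation:

        import numpy as np

        def gain_and_readnoise(bias1, bias2, flat1, flat2):
            # Classic two-bias/two-flat photon-transfer estimate.
            signal = flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean()
            var_f = np.var(flat1 - flat2)   # 2*(shot + read) variance, ADU^2
            var_b = np.var(bias1 - bias2)   # 2*read variance, ADU^2
            gain = signal / (var_f - var_b)            # e-/ADU
            read_noise = gain * np.sqrt(var_b / 2.0)   # electrons RMS
            return gain, read_noise

        # Synthetic frames: true gain 2 e-/ADU, read noise ~10 e-:
        rng = np.random.default_rng(2)
        bias1, bias2 = (rng.normal(500.0, 5.0, (256, 256)) for _ in range(2))
        flat1, flat2 = (rng.poisson(20000, (256, 256)) / 2.0 + 500.0
                        for _ in range(2))
        print(gain_and_readnoise(bias1, bias2, flat1, flat2))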

  4. Gbm.auto: A software tool to simplify spatial modelling and Marine Protected Area planning

    PubMed Central

    Officer, Rick; Clarke, Maurice; Reid, David G.; Brophy, Deirdre

    2017-01-01

    Boosted Regression Trees: excellent for data-poor spatial management but hard to use. Marine resource managers and scientists often advocate spatial approaches to manage data-poor species. Existing spatial prediction and management techniques are either insufficiently robust, struggle with sparse input data, or make suboptimal use of multiple explanatory variables. Boosted Regression Trees feature excellent performance and are well suited to modelling the distribution of data-limited species, but are extremely complicated and time-consuming to learn and use, hindering access for a wide potential user base and therefore limiting uptake and usage. BRTs automated and simplified for accessible general use with a rich feature set: we have built a software suite in R which integrates pre-existing functions with new tailor-made functions to automate the processing and predictive mapping of species abundance data. By automating and greatly simplifying Boosted Regression Tree spatial modelling, the gbm.auto R package suite makes this powerful statistical modelling technique more accessible to potential users in the ecological and modelling communities. The package and its documentation allow the user to generate maps of predicted abundance, visualise the representativeness of those abundance maps, and plot the relative influence of explanatory variables and their relationship to the response variables. Databases of the processed model objects and a report explaining all the steps taken within the model are also generated. The package includes a previously unavailable Decision Support Tool which combines estimated escapement biomass (the percentage of an exploited population which must be retained each year to conserve it) with the predicted abundance maps to generate maps showing the location and size of habitat that should be protected to conserve the target stocks (candidate MPAs), based on stakeholder priorities, such as the minimisation of fishing effort displacement. Gbm.auto for management in various settings: by bridging the gap between advanced statistical methods for species distribution modelling and conservation science, management and policy, these tools can allow improved spatial abundance predictions, and therefore better management, decision-making, and conservation. Although this package was built to support spatial management of a data-limited marine elasmobranch fishery, it should be equally applicable to spatial abundance modelling, area protection, and stakeholder engagement in various scenarios. PMID:29216310

  5. Power estimation using simulations for air pollution time-series studies

    PubMed Central

    2012-01-01

    Background Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
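
    The simulation recipe described here condenses to a short loop: generate Poisson daily counts with a chosen log-linear pollutant association, refit the GLM, and report the rejection rate. In the sketch below the confounder is included in the fitted model but given no effect in the generating model, and all series and effect sizes are synthetic:

        import numpy as np
        import statsmodels.api as sm

        def simulated_power(pollutant, confounders, base_count, true_log_rr,
                            n_sims=1000, alpha=0.05, seed=0):
            # Simulate Poisson daily counts whose log-rate rises with the
            # pollutant, refit the GLM each time, and count rejections.
            rng = np.random.default_rng(seed)
            X = sm.add_constant(np.column_stack([pollutant, confounders]))
            mu = base_count * np.exp(true_log_rr * pollutant)
            hits = 0
            for _ in range(n_sims):
                y = rng.poisson(mu)
                fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
                hits += fit.pvalues[1] < alpha   # column 1 is the pollutant
            return hits / n_sims

        rng = np.random.default_rng(42)
        days = 365
        pm = rng.gamma(4.0, 5.0, size=days)        # synthetic pollutant series
        temp = rng.normal(20.0, 5.0, size=days)    # synthetic confounder
        print(simulated_power(pm, temp, base_count=30.0, true_log_rr=0.002))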

  6. Power estimation using simulations for air pollution time-series studies.

    PubMed

    Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt

    2012-09-20

    Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.

  7. The GRIDView Visualization Package

    NASA Astrophysics Data System (ADS)

    Kent, B. R.

    2011-07-01

    Large three-dimensional data cubes, catalogs, and spectral line archives are increasingly important elements of the data discovery process in astronomy. Visualization of large data volumes is of vital importance for the success of large spectral line surveys. Examples of data reduction utilizing the GRIDView software package are shown. The package allows users to manipulate data cubes, extract spectral profiles, and measure line properties. The package and included graphical user interfaces (GUIs) are designed with pipeline infrastructure in mind. The software has been used with great success analyzing spectral line and continuum data sets obtained from large radio survey collaborations. The tools are also important for multi-wavelength cross-correlation studies and incorporate Virtual Observatory client applications for overlaying database information in real time as cubes are examined by users.

  8. Building the Community Online Resource for Statistical Seismicity Analysis (CORSSA)

    NASA Astrophysics Data System (ADS)

    Michael, A. J.; Wiemer, S.; Zechar, J. D.; Hardebeck, J. L.; Naylor, M.; Zhuang, J.; Steacy, S.; Corssa Executive Committee

    2010-12-01

    Statistical seismology is critical to the understanding of seismicity, the testing of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology - especially to those aspects with great impact on public policy - statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal of enhancing the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA). CORSSA is a web-based educational platform that is authoritative, up-to-date, prominent, and user-friendly. We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each containing between four and eight articles. The CORSSA web page, www.corssa.org, officially unveiled on September 6, 2010, debuts with an initial set of approximately 10 to 15 articles available online for viewing and commenting, with additional articles to be added over the coming months. Each article will be peer-reviewed and will present a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles will include: introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. A special article will compare and review available statistical seismology software packages.
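
    To give a flavour of the planned material, one of the most basic estimates covered by statistical-seismology tutorials is the maximum-likelihood Gutenberg-Richter b-value (Aki 1965). A bare sketch, omitting the magnitude-binning correction a careful treatment would add:

        import numpy as np

        def aki_b_value(magnitudes, completeness):
            # Maximum-likelihood b-value for events at or above the
            # completeness magnitude: b = log10(e) / (mean(M) - Mc).
            m = np.asarray(magnitudes, dtype=float)
            m = m[m >= completeness]
            return np.log10(np.e) / (m.mean() - completeness)

        mags = [2.1, 2.3, 2.2, 2.8, 3.4, 2.5, 2.0, 4.1, 2.6, 3.0]  # toy catalog
        print(aki_b_value(mags, completeness=2.0))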

  9. Design and Implementation of a C++ Software Package to scan for and parse Tsunami Messages issued by the Tsunami Warning Centers for Operational use at the Pacific Tsunami Warning Center

    NASA Astrophysics Data System (ADS)

    Sardina, V.

    2012-12-01

    The US Tsunami Warning Centers (TWCs) have traditionally generated their tsunami message products primarily as blocks of text tagged with headers that identify them on each particular communications (comms) circuit. Each warning center has a primary area of responsibility (AOR) within which it has an authoritative role regarding parameters such as earthquake location and magnitude. This means that when a major tsunamigenic event occurs, the other warning centers need to quickly access the earthquake parameters issued by the authoritative warning center before issuing their own message products intended for customers in their own AOR. Thus, within the operational context of the TWCs, the scientists on duty have an operational need to access the information contained in the message products issued by other warning centers as quickly as possible. As a solution to this operational problem, we designed and implemented a C++ software package that allows scanning for and parsing the entire suite of tsunami message products issued by the Pacific Tsunami Warning Center (PTWC), the West Coast and Alaska Tsunami Warning Center (WCATWC), and the Japan Meteorological Agency (JMA). The scanning and parsing classes composing the resulting C++ software package allow parsing of both non-official message products (observatory messages) routinely issued by the TWCs and all official tsunami message products such as tsunami advisories, watches, and warnings. This software package currently allows scientists on duty at the PTWC to automatically retrieve the parameters contained in tsunami messages issued by WCATWC, JMA, or PTWC itself. Extension of the capabilities of the classes composing the software package would make it possible to generate XML- and CAP-compliant versions of the TWCs' message products until new messaging software natively adds these capabilities. Customers who receive the TWCs' tsunami message products could also use the package to automatically retrieve information from messages sent via any text-based communications circuit currently used by the TWCs to disseminate their tsunami message products.
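
    The scan-and-parse idea is straightforward to sketch. The fragment below is schematic Python (the package itself is C++), and the field patterns are invented for illustration; real TWC bulletin formats differ:

        import re

        # Hypothetical bulletin fields; actual TWC message formats differ.
        PATTERNS = {
            "origin_time": re.compile(r"ORIGIN TIME\s*-\s*(\d{4}Z \d{2} \w{3} \d{4})"),
            "magnitude":   re.compile(r"MAGNITUDE\s*-\s*([\d.]+)"),
            "latitude":    re.compile(r"LATITUDE\s*-\s*([\d.]+) (NORTH|SOUTH)"),
            "longitude":   re.compile(r"LONGITUDE\s*-\s*([\d.]+) (EAST|WEST)"),
        }

        def parse_bulletin(text):
            # Extract earthquake parameters from one text bulletin.
            out = {}
            for field, pat in PATTERNS.items():
                m = pat.search(text)
                if m:
                    out[field] = m.groups()[0] if len(m.groups()) == 1 else m.groups()
            return out

        msg = ("TSUNAMI BULLETIN NUMBER 001\n"
               "ORIGIN TIME - 0712Z 01 APR 2012\n"
               "MAGNITUDE - 7.8\n"
               "LATITUDE - 5.2 SOUTH\n"
               "LONGITUDE - 153.1 EAST\n")
        print(parse_bulletin(msg))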

  10. 21 CFR 801.50 - Labeling requirements for stand-alone software.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Labeling requirements for stand-alone software....50 Labeling requirements for stand-alone software. (a) Stand-alone software that is not distributed... in packaged form, stand-alone software regulated as a medical device must provide its unique device...

  11. Community-driven computational biology with Debian Linux.

    PubMed

    Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles

    2010-12-21

    The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.

  12. Predictive Utility of Marketed Volumetric Software Tools in Subjects at Risk for Alzheimer Disease: Do Regions Outside the Hippocampus Matter?

    PubMed

    Tanpitukpongse, T P; Mazurowski, M A; Ikhena, J; Petrella, J R

    2017-03-01

    Alzheimer disease is a prevalent neurodegenerative disease. Computer assessment of brain atrophy patterns can help predict conversion to Alzheimer disease. Our aim was to assess the prognostic efficacy of individual-versus-combined regional volumetrics in 2 commercially available brain volumetric software packages for predicting conversion of patients with mild cognitive impairment to Alzheimer disease. Data were obtained through the Alzheimer's Disease Neuroimaging Initiative. One hundred ninety-two subjects (mean age, 74.8 years; 39% female) diagnosed with mild cognitive impairment at baseline were studied. All had T1-weighted MR imaging sequences at baseline and 3-year clinical follow-up. Analysis was performed with NeuroQuant and Neuroreader. Receiver operating characteristic curves assessing the prognostic efficacy of each software package were generated using a univariable approach based on individual regional brain volumes and 2 multivariable approaches (multiple regression and random forest) combining multiple volumes. On univariable analysis of 11 NeuroQuant and 11 Neuroreader regional volumes, hippocampal volume had the highest area under the curve for both software packages (0.69, NeuroQuant; 0.68, Neuroreader) and was not significantly different (P > .05) between packages. Multivariable analysis did not increase the area under the curve for either package (NeuroQuant: 0.63 with logistic regression, 0.60 with random forest; Neuroreader: 0.65 with logistic regression, 0.62 with random forest). Of the multiple regional volume measures available in FDA-cleared brain volumetric software packages, hippocampal volume remains the best single predictor of conversion of mild cognitive impairment to Alzheimer disease at 3-year follow-up. Combining volumetrics did not add additional prognostic efficacy. Therefore, future prognostic studies in mild cognitive impairment, combining such tools with demographic and other biomarker measures, are justified in using hippocampal volume as the only volumetric biomarker. © 2017 by American Journal of Neuroradiology.

  13. DEVELOPMENT OF A PORTABLE SOFTWARE LANGUAGE FOR PHYSIOLOGICALLY-BASED PHARMACOKINETIC (PBPK) MODELS

    EPA Science Inventory

    The PBPK modeling community has had a long-standing problem with modeling software compatibility. The numerous software packages used for PBPK models are, at best, minimally compatible. This creates problems ranging from model obsolescence due to software support discontinuation...

  14. Common Ada (tradename) Missile Package (CAMP) Project. Missile Software Parts. Volume 8. Detail Design Document

    DTIC Science & Technology

    1988-03-01

    PACKAGE BODY TLCSC P661 (CATALOG #P106-0) This package contains the CAMP parts required to do the waypoint steering portion of navigation. The...3.3.4.1.6 PROCESSING The following describes the processing performed by this part: package body WaypointSteering is package body ...Steering_Vector_Operations is separate; package body Steering_Vector_Operations_with_Arcsin is separate; procedure Compute Turn_Angle_and Direction (UnitNormal C

  15. Assessment of left ventricular mechanical dyssynchrony by phase analysis of gated-SPECT myocardial perfusion imaging and tissue Doppler imaging: comparison between QGS and ECTb software packages.

    PubMed

    Rastgou, Fereydoon; Shojaeifard, Maryam; Amin, Ahmad; Ghaedian, Tahereh; Firoozabadi, Hasan; Malek, Hadi; Yaghoobi, Nahid; Bitarafan-Rajabi, Ahmad; Haghjoo, Majid; Amouzadeh, Hedieh; Barati, Hossein

    2014-12-01

    Recently, the phase analysis of gated single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) has become feasible via several software packages for the evaluation of left ventricular mechanical dyssynchrony. We compared two quantitative software packages, quantitative gated SPECT (QGS) and Emory cardiac toolbox (ECTb), with tissue Doppler imaging (TDI) as the conventional method for the evaluation of left ventricular mechanical dyssynchrony. Thirty-one patients with severe heart failure (ejection fraction ≤35%) and regular heart rhythm, who referred for gated-SPECT MPI, were enrolled. TDI was performed within 3 days after MPI. Dyssynchrony parameters derived from gated-SPECT MPI were analyzed by QGS and ECTb and were compared with the Yu index and septal-lateral wall delay measured by TDI. QGS and ECTb showed a good correlation for assessment of phase histogram bandwidth (PHB) and phase standard deviation (PSD) (r = 0.664 and r = 0.731, P < .001, respectively). However, the mean value of PHB and PSD by ECTb was significantly higher than that of QGS. No significant correlation was found between ECTb and QGS and the Yu index. Nevertheless, PHB, PSD, and entropy derived from QGS revealed a significant (r = 0.424, r = 0.478, r = 0.543, respectively; P < .02) correlation with septal-lateral wall delay. Despite a good correlation between QGS and ECTb software packages, different normal cut-off values of PSD and PHB should be defined for each software package. There was only a modest correlation between phase analysis of gated-SPECT MPI and TDI data, especially in the population of heart failure patients with both narrow and wide QRS complex.
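
    Both headline metrics have simple definitions: PSD is the spread of the per-pixel onset-of-contraction phases and PHB is the narrowest band containing a set fraction of them. A simplified sketch that ignores the 0/360-degree wrap-around and the histogram weighting the commercial packages apply:

        import numpy as np

        def dyssynchrony_metrics(phases_deg, coverage=0.95):
            # phases_deg: per-pixel phase of onset of contraction, in degrees.
            s = np.sort(np.asarray(phases_deg, dtype=float))
            psd = s.std()                                   # phase standard deviation
            k = int(np.ceil(coverage * s.size))
            phb = (s[k - 1:] - s[: s.size - k + 1]).min()   # narrowest 95% band
            return psd, phb

        rng = np.random.default_rng(0)
        print(dyssynchrony_metrics(rng.normal(140.0, 18.0, size=600)))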

  16. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialised area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point cloud to surface reconstruction, volume rendering, batch processing, etc. The software package is currently used in a number of universities for teaching and research.

  17. MOSAIC: Software for creating mosaics from collections of images

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Gezari, D. Y.

    1992-01-01

    We have developed a powerful, versatile image-processing and analysis software package called MOSAIC, designed specifically for the manipulation of digital astronomical image data obtained with (but not limited to) two-dimensional array detectors. The software package is implemented in the Interactive Data Language (IDL) and incorporates new methods for processing, calibration, analysis, and visualization of astronomical image data. It stresses effective methods for creating mosaic images from collections of individual exposures while preserving the photometric integrity of the original data. Since IDL is available on many computers, the MOSAIC software runs on most UNIX and VAX workstations with the X-Windows or SunView graphics interface.
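
    The IDL implementation is not reproduced here; as a minimal Python sketch of the core bookkeeping behind photometry-preserving mosaic construction, the following hypothetical function places individual exposures into a common output grid at known integer offsets and averages overlapping pixels, so that mean pixel values are unchanged where frames overlap.

        import numpy as np

        def build_mosaic(frames, offsets, shape):
            """Place each frame at its (row, col) offset; average overlaps."""
            total = np.zeros(shape)
            count = np.zeros(shape)
            for frame, (r, c) in zip(frames, offsets):
                h, w = frame.shape
                total[r:r + h, c:c + w] += frame
                count[r:r + h, c:c + w] += 1
            # Average where covered; mark uncovered pixels as NaN.
            return np.where(count > 0, total / np.maximum(count, 1), np.nan)

        # Hypothetical example: two overlapping 4x4 exposures on a 6x6 grid.
        a = np.ones((4, 4))
        b = 2 * np.ones((4, 4))
        mosaic = build_mosaic([a, b], [(0, 0), (2, 2)], (6, 6))

    A real pipeline would first register the exposures (sub-pixel offsets, flat-fielding, background matching) before this combination step.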

  18. FTOOLS: A FITS Data Processing and Analysis Software Package

    NASA Astrophysics Data System (ADS)

    Blackburn, J. K.

    FTOOLS, a highly modular collection of over 110 utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Science Archive Research Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task, such as presenting file contents, extracting specific rows or columns, appending or merging tables, binning values in a column, or selecting subsets of rows based on a Boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations, such as generating and displaying spectra or light curves. The collection provides both generic processing and analysis utilities and utilities specific to the high energy astrophysics data sets used for the ASCA, ROSAT, GRO, and XTE missions. A core set of FTOOLS, providing support for generic FITS data processing, FITS image analysis, and timing analysis, can easily be split out of the full software package for users who do not need the high energy astrophysics mission utilities. The FTOOLS software package is designed to be both compatible with IRAF and completely stand-alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self-documenting through the IRAF help facility and a stand-alone help task. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. Data-format dependencies between hardware platforms are isolated through the FITSIO library package.
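
    FTOOLS tasks are invoked from the command line, and their exact invocations are not shown in this abstract; as an illustrative analog of one of the single-task operations described (selecting table rows on a Boolean expression), the following Python sketch uses astropy.io.fits with a hypothetical file events.fits and a hypothetical column ENERGY.

        from astropy.io import fits

        # Hypothetical file and column names, for illustration only; the
        # equivalent FTOOLS task would be run from the shell, not Python.
        with fits.open("events.fits") as hdul:
            table = hdul[1].data                     # first extension: binary table
            selected = table[table["ENERGY"] > 2.0]  # Boolean row selection
            fits.BinTableHDU(data=selected).writeto("filtered.fits",
                                                    overwrite=True)

    Because each such operation reads and writes plain FITS files, steps like this can be chained freely, which is the design point of the FTOOLS collection.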

  19. Courseware Review.

    ERIC Educational Resources Information Center

    Risley, John, Ed.

    1988-01-01

    Compares the features of the sonic rangers available from HRM Software, MICROMEASUREMENTS, NAGAWTIS Software Research, and PASCO Scientific for demonstrations and experiments in mechanics. Presents the advantages of the sonic rangers and the typical graphics displayed by each software package. (YP)

  20. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

    Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package that converts four types of data (sounding, single level, grid, and image) into standard random-access formats has been implemented and integrated with the MASS AVE80 Series general-purpose plotting, graphics display, and data analysis software package. The interactive analysis and display graphics software package (AVE80), which analyzes large volumes of conventional and satellite-derived meteorological data, has been enhanced to provide imaging and color graphics display using color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by APPLE III computer systems installed in individual scientists' offices and integrated with the MASS system, providing color video, graphics, and character display of the four data types.
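
    The MASS record layouts are not specified in the abstract; as a minimal sketch of what a random-access format buys, the following Python code writes fixed-length binary records so that any record can be read by seeking directly to index * record_size rather than scanning the file. The three-field layout (pressure, temperature, dewpoint) is hypothetical.

        import struct

        # Hypothetical fixed-length record: pressure, temperature, dewpoint.
        RECORD = struct.Struct("<fff")

        # Write sounding levels as fixed-length records.
        levels = [(1000.0, 25.3, 18.2), (850.0, 15.1, 10.4), (700.0, 5.2, -2.1)]
        with open("sounding.dat", "wb") as f:
            for level in levels:
                f.write(RECORD.pack(*level))

        # Random access: seek straight to the i-th record, no scanning.
        with open("sounding.dat", "rb") as f:
            i = 1
            f.seek(i * RECORD.size)
            pressure, temp, dewpoint = RECORD.unpack(f.read(RECORD.size))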
