Sample records for perl yoram avnimelech

  1. Bhartia Receives 2012 Yoram J. Kaufman Unselfish Cooperation in Research Award: Response

    NASA Astrophysics Data System (ADS)

    Bhartia, Pawan K.

    2013-11-01

    It was an honor and pleasure to receive the 2012 Yoram J. Kaufman Unselfish Cooperation in Research Award from the Atmospheric Sciences Section of the American Geophysical Union. Yoram and I joined NASA around the same time and were colleagues at NASA Goddard Space Flight Center until his tragic death in 2006 in a bicycle accident on the GSFC campus. The award named after him not only honors Yoram's life and work but is also one of the rare scientific awards given for collaboration and teamwork, at which Yoram excelled.

  2. Oltmans Receives 2013 Yoram J. Kaufman Unselfish Cooperation in Research Award: Response

    NASA Astrophysics Data System (ADS)

    Oltmans, Samuel J.

    2014-08-01

    I am humbled to receive the Yoram J. Kaufman Unselfish Cooperation in Research Award. To be included in the distinguished company of the previous award recipients is an honor that is deeply gratifying.

  3. A Comprehensive Theory of Algorithms for Wireless Networks and Mobile Systems

    DTIC Science & Technology

    2016-06-08

    David Peleg. Nonuniform SINR+Voronoi Diagrams are Effectively Uniform. In Yoram Moses, editor, Distributed Computing: 29th International Symposium...in Computer Science, page 559. Springer, 2014. [16] Erez Kantor, Zvi Lotker, Merav Parter, and David Peleg. Nonuniform SINR+Voronoi diagrams are...Merav Parter, and David Peleg. Nonuniform SINR+Voronoi diagrams are effectively uniform. In Yoram Moses, editor, Distributed Computing - 29th

  4. ICTNET at Web Track 2012 Ad-hoc Task

    DTIC Science & Technology

    2012-11-01

    Model and use it as baseline this year. 3.2 Learning to rank Learning to rank (LTR) introduces machine learning to the retrieval ranking problem. It...Yoram Singer. An efficient boosting algorithm for combining preferences [J]. The Journal of Machine Learning Research. 2003.

  5. Oltmans Receives 2013 Yoram J. Kaufman Unselfish Cooperation in Research Award: Citation

    NASA Astrophysics Data System (ADS)

    Thompson, Anne

    2014-08-01

    Samuel "Sam" Oltmans, an AGU Fellow since 2007, was head of the National Oceanic and Atmospheric Administration's (NOAA) Global Monitoring Division Ozone and Water Vapor group for more than 30 years. He is currently a senior research scientist at the Cooperative Institute for Research in the Environmental Sciences (CIRES) of the University of Colorado at Boulder.

  6. The Role of Aerosols in Cloud Growth, Suppression, and Precipitation: Yoram Kaufman and his Contributions

    NASA Technical Reports Server (NTRS)

    King, Michael D.

    2006-01-01

    Aerosol particles are produced in the Earth's atmosphere through both natural and manmade processes, and contribute profoundly to (i) the formation and characteristics of clouds, (ii) the lifetime of clouds, (iii) the optical and microphysical properties of clouds, (iv) human health, through effects on air quality and particulate size as well as by acting as vectors for the transport of pathogens, (v) climate response and feedbacks, (vi) precipitation, and (vii) harmful algal blooms. Without aerosol particles in the Earth's atmosphere there would be no fogs, no clouds, no mists, and probably no rain, as noted as far back as 1880 by Scottish physicist John Aitken. With the modern development of ground-based, airborne, and satellite-based instrumentation, much progress has been made in linking phenomena and processes together and in bringing regional air quality characteristics and hypothesized cloud responses under closer scrutiny. In this presentation I will summarize the wide-ranging contributions that Yoram Kaufman has made in ground-based measurements (AERONET), aircraft field campaigns (such as SCAR-B and TARFOX), and, especially, satellite remote sensing (Landsat, MODIS, POLDER) to shed new light on the broad, interdisciplinary field of cloud-aerosol-precipitation interactions.

  7. Perls with Gloria Re-reviewed: Gestalt Techniques and Perls's Practices.

    ERIC Educational Resources Information Center

    Dolliver, Robert H.

    1991-01-01

    Reviews the filmed interview with Gloria by Perls (1965), which demonstrated some standard Gestalt therapy techniques, and presents examples from the film. Identifies discrepancies between Perls's description of Gestalt therapeutic processes and his interview behavior. Reflects on the inherent difficulties with the concept of the emerging…

  8. autokonf - A Configuration Script Generator Implemented in Perl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reus, J F

    This paper discusses configuration scripts in general and the scripting language issues involved. A brief description of GNU autoconf is provided along with a contrasting overview of autokonf, a configuration script generator implemented in Perl, whose macros are implemented in Perl, generating a configuration script in Perl. It is very portable, easily extensible, and readily mastered.
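
    To make the macro idea concrete, here is a minimal sketch (not autokonf's actual interface, which the abstract does not show) of a configure-style probe written as a plain Perl sub, in the spirit of autoconf's AC_CHECK_PROG, recording results for a generated configuration:

        use strict;
        use warnings;

        my %config;

        # "Macro": find an executable on PATH and record its full path.
        sub check_prog {
            my ($name) = @_;
            for my $dir ( split /:/, $ENV{PATH} // '' ) {
                my $path = "$dir/$name";
                if ( -x $path ) { $config{ uc $name } = $path; return $path; }
            }
            return;
        }

        check_prog('perl');
        check_prog('make');
        print "$_ = $config{$_}\n" for sort keys %config;   # emitted configuration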

  9. Placing pain on the sensory map: classic papers by Ed Perl and colleagues.

    PubMed

    Mason, Peggy

    2007-03-01

    This essay looks at two papers published by Ed Perl and co-workers that identified specifically nociceptive neurons in the periphery and superficial dorsal horn. Bessou P and Perl ER. Response of cutaneous sensory units with unmyelinated fibers to noxious stimuli. J Neurophysiol 32: 1025-1043, 1969. Christensen BN and Perl ER. Spinal neurons specifically excited by noxious or thermal stimuli: marginal zone of the dorsal horn. J Neurophysiol 33: 293-307, 1970.

  10. The Bio-Community Perl toolkit for microbial ecology.

    PubMed

    Angly, Florent E; Fields, Christopher J; Tyson, Gene W

    2014-07-01

    The development of bioinformatic solutions for microbial ecology in Perl is limited by the lack of modules to represent and manipulate microbial community profiles from amplicon and meta-omics studies. Here we introduce Bio-Community, an open-source, collaborative toolkit that extends BioPerl. Bio-Community interfaces with commonly used programs using various file formats, including BIOM, and provides operations such as rarefaction and taxonomic summaries. Bio-Community will help bioinformaticians to quickly piece together custom analysis pipelines and develop novel software. Availability and implementation: Bio-Community is cross-platform Perl code available from http://search.cpan.org/dist/Bio-Community under the Perl license. A readme file describes software installation and how to contribute. © The Author 2014. Published by Oxford University Press.
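
    A minimal usage sketch follows; the constructor and method names are assumed from the toolkit's documentation and should be verified against the current Bio-Community POD:

        use strict;
        use warnings;
        use Bio::Community;
        use Bio::Community::Member;

        # Build a community profile and add one member (e.g. an OTU) with a count.
        my $community = Bio::Community->new( -name => 'soil_sample_1' );
        my $member    = Bio::Community::Member->new( -id => 'OTU_17' );
        $community->add_member( $member, 42 );

        print $community->get_members_count, " total counts\n";   # assumed method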

  11. Perl Embedded in PTC's Pro/ENGINEER, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2003-12-22

    Pro-PERL (a.k.a. Pro/PERL) is a Perl extension to the PTC Pro/TOOLKIT API for the PTC Pro/ENGINEER CAD application, including an embedded interpreter. It can be used to automate and customize Pro/ENGINEER, create Vendor Neutral Archive (VNA) format files, and re-create CAD models from the VNA files. This has applications in sanitizing classified CAD models created in a classified environment for transfer to an open environment, in creating template models for modification into finished models by non-expert users, and in transferring design-intent data to other modeling technologies.

  12. TkPl_SU: An Open-source Perl Script Builder for Seismic Unix

    NASA Astrophysics Data System (ADS)

    Lorenzo, J. M.

    2017-12-01

    TkPl_SU (beta) is a graphical user interface (GUI) for selecting parameters of Seismic Unix (SU) modules. Seismic Unix (Stockwell, 1999) is a widely distributed free software package for seismic reflection and signal processing. Perl/Tk is a mature, well-documented, and free object-oriented graphical user interface toolkit for Perl. In a classroom environment, shell scripting of SU modules engages students and helps focus on the theoretical limitations and strengths of signal processing. However, complex interactive processing stages, e.g., selection of optimal stacking velocities, killing bad data traces, or spectral analysis, require advanced flows beyond the scope of introductory classes. In a research setting, special functionality from other free seismic processing software such as SioSeis (UCSD-NSF) can be incorporated readily via an object-oriented style of programming. An object-oriented approach is a first step toward efficient, extensible programming of multi-step processes, and a simple GUI simplifies parameter selection and decision making. Currently, in TkPl_SU, Perl 5 packages wrap 19 of the most common SU modules used in teaching undergraduate and first-year graduate classes (e.g., filtering, display, velocity analysis, and stacking). Perl packages (classes) can advantageously add new functionality around each module and clarify parameter names for easier usage. For example, through the use of methods, packages can isolate the user from repetitive control structures, as well as replace the names of abbreviated parameters with self-describing names. Moose, an extension of the Perl 5 object system, greatly facilitates an object-oriented style. Perl wrappers are self-documenting via Perl's plain old documentation (POD) markup.
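
    The wrapping style described above can be sketched with Moose as follows; SU::Filter is a hypothetical class name (TkPl_SU's real package names are not given here), though sufilter and its f= parameter are standard Seismic Unix:

        package SU::Filter;
        use Moose;

        # Self-describing attribute standing in for SU's terse 'f=' parameter.
        has corner_frequencies => (
            is      => 'rw',
            isa     => 'ArrayRef[Num]',
            default => sub { [ 5, 10, 40, 50 ] },
        );

        # Render the command line for the SU module 'sufilter'.
        sub command {
            my $self = shift;
            return 'sufilter f=' . join( ',', @{ $self->corner_frequencies } );
        }

        package main;
        my $filter = SU::Filter->new( corner_frequencies => [ 3, 6, 30, 36 ] );
        print $filter->command, "\n";    # sufilter f=3,6,30,36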

  13. Myles A. Steiner | NREL

    Science.gov Websites

    Publication list fragments citing Emmett E. Perl as coauthor (with, e.g., John Simon, Daniel J. Friedman, Nikhil Jain, Paul Sharps, Claiborne McPheeters, Yukun Sun, Kevin L. Schulte, Ryan M. France, William E. McMahon, and M.A. Steiner), including E.E. Perl, D. Kuciauskas, J. Simon, D.J. Friedman, M.A. Steiner, Journal of Applied Physics, 122.

  14. A Study of Kofi Annan’s Leadership as the United Nations Secretary General and His Impact on the Implementation and Success of a Sub-Saharan Africa Agenda

    DTIC Science & Technology

    2009-12-11

    some tasks that involve strict adherence to safety demand directive leadership behavior. When a leader wishes to introduce a new marketing strategy, it... AUTHOR(S): Major Yoram M. Ngwira, Malawi Defence Force... PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): U.S. Army Command and General Staff College, ATTN: ATZL-SWD-GD, Fort Leavenworth

  15. The Navy Enlistment Field Marketing Experiment. Volume 7. The Wharton Administered Navy Tracking Survey: A Segmentation Approach

    DTIC Science & Technology

    1982-10-15

    the two may interact. REFERENCES [1] Arnold, Stephen J. (1979), "A Test for Clusters," Journal of Marketing Research, November, pp. 545-551...of Marketing Research, August, pp. 405-412. APPENDIX A: RESULTS OF FACTOR ANALYSIS OF LIFE GOALS...Volume 5, Pre-intervention Recruiting Environment, 1981. [9] Wind, Yoram (1978), "Issues and Advances in Segmentation Research," Journal of Marketing

  16. Thermoreception and Nociception of the Skin: A Classic Paper of Bessou and Perl and Analyses of Thermal Sensitivity during a Student Laboratory Exercise

    ERIC Educational Resources Information Center

    Kuhtz-Buschbeck, Johann P.; Andresen, Wiebke; Gobel, Stephan; Gilster, Rene; Stick, Carsten

    2010-01-01

    About four decades ago, Perl and collaborators were the first to unambiguously identify specifically nociceptive neurons in the periphery. In their classic work, they recorded action potentials from single C-fibers of a cutaneous nerve in cats while applying carefully graded stimuli to the skin (Bessou P, Perl ER. Response of cutaneous…

  17. Analytic reconstruction of magnetic resonance imaging signal obtained from a periodic encoding field.

    PubMed

    Rybicki, F J; Hrovat, M I; Patz, S

    2000-09-01

    We have proposed a two-dimensional PERiodic-Linear (PERL) magnetic encoding field geometry B(x, y) = g(y) y cos(q(x) x) and a magnetic resonance imaging pulse sequence which incorporates two fields to image a two-dimensional spin density: a standard linear gradient in the x dimension, and the PERL field. Because of its periodicity, the PERL field produces a signal where the phase of the two dimensions is functionally different. The x dimension is encoded linearly, but the y dimension appears as the argument of a sinusoidal phase term. Thus, the time-domain signal and image spin density are not related by a two-dimensional Fourier transform. They are related by a one-dimensional Fourier transform in the x dimension and a new Bessel function integral transform (the PERL transform) in the y dimension. The inverse of the PERL transform provides a reconstruction algorithm for the y dimension of the spin density from the signal space. To date, the inverse transform has been computed numerically by a Bessel function expansion over its basis functions. This numerical solution used a finite sum to approximate an infinite summation and thus introduced a truncation error. This work analytically determines the basis functions for the PERL transform and incorporates them into the reconstruction algorithm. The improved algorithm is demonstrated by (1) direct comparison between the numerically and analytically computed basis functions, and (2) reconstruction of a known spin density. The new solution for the basis functions also provides a proof of the system function for the PERL transform under specific conditions.
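
    The sinusoidal phase in y is what brings Bessel functions into the reconstruction. Restating the field from the abstract and adding the textbook Jacobi-Anger expansion (a standard identity, not taken from the paper) makes the connection explicit:

        % Encoding field as given in the abstract:
        \[ B(x, y) = g(y)\, y \cos\!\big(q(x)\, x\big) \]
        % Spins accrue phase proportional to the local field, so y enters the
        % signal inside a cosine phase; the Jacobi--Anger expansion rewrites
        % such a phase as a series in Bessel functions J_n, which is why a
        % Bessel integral transform replaces the Fourier transform in y:
        \[ e^{i z \cos\theta} = \sum_{n=-\infty}^{\infty} i^{n} J_n(z)\, e^{i n \theta} \]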

  18. BIO::Phylo-phyloinformatic analysis using perl.

    PubMed

    Vos, Rutger A; Caravas, Jason; Hartmann, Klaas; Jensen, Mark A; Miller, Chase

    2011-02-27

    Phyloinformatic analyses involve large amounts of data and metadata of complex structure. Collecting, processing, analyzing, visualizing and summarizing these data and metadata should be done in steps that can be automated and reproduced. This requires flexible, modular toolkits that can represent, manipulate and persist phylogenetic data and metadata as objects with programmable interfaces. This paper presents Bio::Phylo, a Perl5 toolkit for phyloinformatic analysis. It implements classes and methods that are compatible with the well-known BioPerl toolkit, but is independent from it (making it easy to install) and features a richer API and a data model that is better able to manage the complex relationships between different fundamental data and metadata objects in phylogenetics. It supports commonly used file formats for phylogenetic data including the novel NeXML standard, which allows rich annotations of phylogenetic data to be stored and shared. Bio::Phylo can interact with BioPerl, thereby giving access to the file formats that BioPerl supports. Many methods for data simulation, transformation and manipulation, the analysis of tree shape, and tree visualization are provided. Bio::Phylo is composed of 59 richly documented Perl5 modules. It has been deployed successfully on a variety of computer architectures (including various Linux distributions, Mac OS X versions, Windows, Cygwin and UNIX-like systems). It is available as open source (GPL) software from http://search.cpan.org/dist/Bio-Phylo.
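
    A minimal sketch of the toolkit in use follows; parse and to_newick appear in Bio::Phylo's documentation, but treat the exact calls as assumptions to verify against the current POD:

        use strict;
        use warnings;
        use Bio::Phylo::IO qw(parse);

        # Parse a Newick string into a forest and take its first tree.
        my $tree = parse(
            -format => 'newick',
            -string => '((A:1,B:2):1,C:3);',
        )->first;

        print 'Taxa: ', scalar @{ $tree->get_terminals }, "\n";
        print $tree->to_newick, "\n";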

  19. BIO::Phylo-phyloinformatic analysis using perl

    PubMed Central

    2011-01-01

    Background Phyloinformatic analyses involve large amounts of data and metadata of complex structure. Collecting, processing, analyzing, visualizing and summarizing these data and metadata should be done in steps that can be automated and reproduced. This requires flexible, modular toolkits that can represent, manipulate and persist phylogenetic data and metadata as objects with programmable interfaces. Results This paper presents Bio::Phylo, a Perl5 toolkit for phyloinformatic analysis. It implements classes and methods that are compatible with the well-known BioPerl toolkit, but is independent from it (making it easy to install) and features a richer API and a data model that is better able to manage the complex relationships between different fundamental data and metadata objects in phylogenetics. It supports commonly used file formats for phylogenetic data including the novel NeXML standard, which allows rich annotations of phylogenetic data to be stored and shared. Bio::Phylo can interact with BioPerl, thereby giving access to the file formats that BioPerl supports. Many methods for data simulation, transformation and manipulation, the analysis of tree shape, and tree visualization are provided. Conclusions Bio::Phylo is composed of 59 richly documented Perl5 modules. It has been deployed successfully on a variety of computer architectures (including various Linux distributions, Mac OS X versions, Windows, Cygwin and UNIX-like systems). It is available as open source (GPL) software from http://search.cpan.org/dist/Bio-Phylo PMID:21352572

  20. BEAM DYNAMICS SIMULATIONS FOR A DC GUN BASED INJECTOR FOR PERL.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ZHOU,F.; BEN-ZVI,I.; WANG,X.J.

    2001-06-18

    The National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory (BNL) is considering an upgrade based on the Photoinjected Energy Recovering Linac (PERL). Various injector schemes for this machine are being extensively investigated at BNL. One of the possible options is a photocathode DC gun. The schematic layout of a PERL DC-gun-based injector and its preliminary beam dynamics are presented in this paper. The transverse and longitudinal emittances of the photoelectron beam were optimized for a DC field of 500 kV.

  1. The Geoinformatica free and open source software stack

    NASA Astrophysics Data System (ADS)

    Jolma, A.

    2012-04-01

    The Geoinformatica free and open source software (FOSS) stack is based mainly on three established FOSS components: GDAL, GTK+, and Perl. GDAL provides access to a very large selection of geospatial data formats and data sources, a generic geospatial data model, and a large collection of geospatial analytical and processing functionality. GTK+ and the Cairo graphics library provide generic graphics and graphical user interface capabilities. Perl is a programming language with a very large set of FOSS modules for a wide range of purposes, and it can be used as an integrative tool for building applications. In the Geoinformatica stack, data storages such as the FOSS RDBMS PostgreSQL with its geospatial extension PostGIS can be used beneath the three components mentioned above. The top layer of Geoinformatica consists of a C library and several Perl modules. The C library comprises a general-purpose raster algebra library, hydrological terrain analysis functions, and visualization code. The Perl modules define a generic visualized geospatial data layer and subclasses for raster data, vector data, and graphs. The hydrological terrain functions are already rather old and suffer, for example, from requiring in-memory rasters. Newer research conducted using the platform includes basic geospatial simulation modeling, visualization of ecological data, linking with a Bayesian network engine for spatial risk assessment in coastal areas, and developing standards-based distributed water resources information systems on the Internet. The Geoinformatica stack constitutes a platform for geospatial research targeted towards custom analytical tools, prototyping, and linking with external libraries. Writing custom analytical tools is supported by the Perl language and the large collection of tools available especially in GDAL and Perl modules. Prototyping is supported by the GTK+ library, the GUI tools, and the support for object-oriented programming in Perl. New feature types, geospatial layer classes, and tools as extensions with specific features can be defined, used, and studied. Linking with external libraries is possible using Perl foreign function interface tools or generic tools such as SWIG. We are interested in implementing and testing the linking of Geoinformatica with existing or new, more specific hydrological FOSS.
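
    As a small illustration of the Perl/GDAL layer the stack builds on, the sketch below opens a raster with the Geo::GDAL binding; these calls are common in the binding's examples, but signatures vary across GDAL versions, so treat them as assumptions:

        use strict;
        use warnings;
        use Geo::GDAL;

        # Open any GDAL-readable raster and inspect its first band.
        my $dataset = Geo::GDAL::Open('dem.tif');    # file name is illustrative
        my $band    = $dataset->Band(1);
        my ( $width, $height ) = $dataset->Size;

        print "Raster is $width x $height pixels\n";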

  2. Martin Perl and the Tau Lepton

    Science.gov Websites

    "…Technology in Experimental Particle Physics", Winter 1995, Vol. 25, No. 4, pages 4-27. Perl compares…

  3. IRACproc: IRAC Post-BCD Processing

    NASA Astrophysics Data System (ADS)

    Schuster, Mike; Marengo, Massimo; Patten, Brian

    2012-09-01

    IRACproc is a software suite that facilitates the co-addition of dithered or mapped Spitzer/IRAC data to make them ready for further analysis, with application to a wide variety of IRAC observing programs. The software runs within PDL, a numeric extension for Perl available from pdl.perl.org, and as stand-alone Perl scripts. In acting as a wrapper for the Spitzer Science Center's MOPEX software, IRACproc improves the rejection of cosmic rays and other transients in the co-added data. In addition, IRACproc performs (optional) Point Spread Function (PSF) fitting, subtraction, and masking of saturated stars.

  4. Perl Extension to the Bproc Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grunau, Daryl W.

    2004-06-07

    The Beowulf Distributed Process Space (Bproc) software stack is comprised of UNIX/Linux kernel modifications and a support library by which a cluster of machines, each running its own private kernel, can present itself as a unified process space to the user. A Bproc cluster contains a single front-end machine and many back-end nodes which receive and run processes given to them by the front-end. Any process which is migrated to a back-end node is also visible as a ghost process on the front-end, and may be controlled there using traditional UNIX semantics (e.g. ps(1), kill(1), etc.). This software is a Perl extension to the Bproc library which enables the Perl programmer to make direct calls to functions within the Bproc library. See http://www.clustermatic.org, http://bproc.sourceforge.net, and http://www.perl.org

  5. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern in which a description of a series of values is given to a constructor, and subsequent queries request values from that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start of the sequence, an end of the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator, where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values; it is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
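
    The underlying pattern can be sketched without the modules themselves: a closure-based iterator whose constructor captures the series description and whose calls yield successive values (illustrative only, not the modules' actual API):

        use strict;
        use warnings;

        # Constructor: capture a description of the series (here, a range).
        sub make_counter {
            my ( $start, $end ) = @_;
            my $next = $start;
            return sub {
                return undef if $next > $end;    # series exhausted
                return $next++;
            };
        }

        my $it = make_counter( 1, 3 );
        while ( defined( my $value = $it->() ) ) {
            print "$value\n";                    # prints 1, 2, 3
        }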

  6. Logs Perl Module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, R. K.

    2007-04-04

    A Perl module designed to read and parse the voluminous set of event or accounting log files produced by a Portable Batch System (PBS) server. This module can filter on date-time and/or record type. The data can be returned in a variety of formats.
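
    The abstract does not show the module's interface, so the following is a from-scratch sketch of parsing one PBS accounting-log record (whose documented layout is date-time, record type, id, and key=value attributes separated by semicolons):

        use strict;
        use warnings;

        # Example PBS accounting record ('E' marks a job-end record).
        my $line = '05/11/2007 10:23:45;E;123.server;user=alice queue=batch';

        my ( $datetime, $type, $id, $rest ) = split /;/, $line, 4;
        my %attr = map { split /=/, $_, 2 } split ' ', $rest;

        # Filter on record type, as the module description mentions.
        if ( $type eq 'E' ) {
            print "Job $id ended; user=$attr{user}, queue=$attr{queue}\n";
        }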

  7. Production code control system for hydrodynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.
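
    The backplane idea reads naturally as a dispatch table; the sketch below is generic Perl in that spirit (PCCS's real registration interface is not shown in this abstract):

        use strict;
        use warnings;

        my %backplane;    # tool name => handler code ref

        sub register { my ( $name, $handler ) = @_; $backplane{$name} = $handler; }
        sub run      { my ( $name, @args )    = @_; $backplane{$name}->(@args); }

        # Plug in tools without the controller knowing their internals.
        register( preprocess => sub { print "preprocessing @_\n" } );
        register( visualize  => sub { print "plotting @_\n" } );

        run( preprocess => 'mesh.in' );
        run( visualize  => 'dump_0042' );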

  8. A Taxonomic Approach to the Gestalt Theory of Perls

    ERIC Educational Resources Information Center

    Raming, Henry E.; Frey, David H.

    1974-01-01

    This study applied content analysis and cluster analysis to the ideas of Fritz Perls to develop a taxonomy of Gestalt processes and goals. Summaries of the typal groups or clusters were written and the implications of taxonomic research in counseling discussed. (Author)

  9. Values in Fritz Perls's Gestalt Therapy: On the Dangers of Half-Truths.

    ERIC Educational Resources Information Center

    Cadwallader, Eva H.

    1984-01-01

    Examines some of the values in Perls's theory of psychotherapy, which his Gestalt Prayer epitomizes. Argues that at least five of the major value claims presupposed by his psychotherapeutic theory and practice are in fact dangerous half-truths. (JAC)

  10. Exceptional collections in surface-like categories

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. G.

    2017-09-01

    We provide a categorical framework for recent results of Markus Perling on the combinatorics of exceptional collections on numerically rational surfaces. Using it, we simplify and generalize some of Perling's results, as well as Vial's criterion for the existence of a numerical exceptional collection. Bibliography: 18 titles.

  11. The Perls Perversion

    ERIC Educational Resources Information Center

    Morris, Kenneth T.

    1975-01-01

    Author describes the Perls perversion, the ego-centered attitude that people should live up to one's expectations and satisfy one's whims, which causes interpersonal friction. RET helps people counteract this perversion by sensitizing them to their internalized irrational belief system, disputing it, and trying to behave rationally. Commentary by…

  12. Reflections on Fritz Perls's Gestalt Prayer.

    ERIC Educational Resources Information Center

    Dolliver, Robert H.

    1981-01-01

    Reviews the alternative forms of Fritz Perls' Gestalt Prayer, which was highly influential in the 1970s when individuality and personal rights were being explored. The prayer is subject to interpretation. Counselors should clarify for clients that ongoing relationships require a great deal of work. (JAC)

  13. Perl-speaks-NONMEM (PsN)--a Perl module for NONMEM related programming.

    PubMed

    Lindbom, Lars; Ribbing, Jakob; Jonsson, E Niclas

    2004-08-01

    The NONMEM program is the most widely used nonlinear regression software in population pharmacokinetic/pharmacodynamic (PK/PD) analyses. In this article we describe a programming library, Perl-speaks-NONMEM (PsN), intended for programmers who aim to use the computational capability of NONMEM in external applications. The library is object oriented and written in the programming language Perl. The classes of the library are built around NONMEM's data, model, and output files. The specification of the NONMEM model is easily set or changed through the model and data file classes, while the output from a model fit is accessed through the output file class. The classes have methods that help the programmer perform common repetitive tasks, e.g. summarising the output from a NONMEM run, setting the initial estimates of a model based on a previous run, or truncating values over a certain threshold in the data file. PsN creates a basis for the development of high-level software using NONMEM as the regression tool.
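
    A hypothetical sketch of the workflow the article describes follows; the class name matches the paper's description of a model file class, but every call below is invented for illustration and must be checked against the PsN documentation:

        use strict;
        use warnings;
        use model;    # PsN's model file class, as described in the article

        # Load a NONMEM control stream and tweak it programmatically.
        my $model = model->new( filename => 'run1.mod' );

        # e.g. set initial estimates from a previous run and write a new
        # control stream -- the repetitive task mentioned in the abstract.
        $model->update_inits( from_output_file => 'run1.lst' );    # hypothetical call
        $model->write( filename => 'run2.mod' );                   # hypothetical call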

  14. BpWrapper: BioPerl-based sequence and tree utilities for rapid prototyping of bioinformatics pipelines.

    PubMed

    Hernández, Yözen; Bernstein, Rocky; Pagan, Pedro; Vargas, Levy; McCaig, William; Ramrattan, Girish; Akther, Saymon; Larracuente, Amanda; Di, Lia; Vieira, Filipe G; Qiu, Wei-Gang

    2018-03-02

    Automated bioinformatics workflows are more robust, easier to maintain, and their results more reproducible when built with command-line utilities than with custom-coded scripts. Command-line utilities have the further benefit of relieving bioinformatics developers of the need to learn, or interact directly with, biological software libraries. There is, however, a lack of command-line utilities that leverage popular open-source biological software toolkits such as BioPerl ( http://bioperl.org ) to make many well-designed, robust, and routinely used biological classes available to a wider base of end users. Designed as standard utilities for UNIX-family operating systems, BpWrapper makes the functionality of some of the most popular BioPerl modules readily accessible on the command line to novice as well as experienced bioinformatics practitioners. The initial release of BpWrapper includes four utilities with concise command-line user interfaces, bioseq, bioaln, biotree, and biopop, specialized for the manipulation of molecular sequences, sequence alignments, phylogenetic trees, and DNA polymorphisms, respectively. Over a hundred methods are currently available as command-line options, and new methods are easily incorporated. Performance of BpWrapper utilities lags that of precompiled utilities but is equivalent to that of other utilities based on BioPerl. BpWrapper has been tested on BioPerl Release 1.6, Perl versions 5.10.1 to 5.25.10, and operating systems including Apple macOS, Microsoft Windows, and GNU/Linux. Release code is available from the Comprehensive Perl Archive Network (CPAN) at https://metacpan.org/pod/Bio::BPWrapper . Source code is available on GitHub at https://github.com/bioperl/p5-bpwrapper . BpWrapper improves on existing sequence utilities by following the design principles of Unix text utilities, including a concise user interface, extensive command-line options, and standard input/output for serialized operations. Further, it provides dozens of novel methods for the manipulation of sequences, alignments, and phylogenetic trees that are unavailable in existing utilities (e.g., EMBOSS, Newick Utilities, and FAST). Bioinformaticians should find BpWrapper useful for rapid prototyping of workflows on the command line without creating custom scripts for comparative genomics and other bioinformatics applications.

  15. UPIC: Perl scripts to determine the number of SSR markers to run

    USDA-ARS?s Scientific Manuscript database

    We have developed Perl scripts for the cost-effective planning of fingerprinting and genotyping experiments. The UPIC scripts detect the best combination of polymorphic simple sequence repeat (SSR) markers and provide coefficients of the amount of information obtainable (number of alleles of patter...

  16. Counselors' Evaluation of Rogers-Perls-Ellis's Relationship Skills

    ERIC Educational Resources Information Center

    Woodward, Wallace S.; And Others

    1975-01-01

    Participants (12 employment counselors and 10 counselor supervisors) attending a three-week workshop on enhancing relationship skills, evaluated the Rogers, Perls, Ellis film, Three Approaches to Psychotherapy, on 15 skills. Results indicate there was general agreement between the counselors and the supervisors when judging levels of therapist…

  17. Comparison of Rogers, Perls, and Ellis on the Hill Counselor Verbal Response Category System.

    ERIC Educational Resources Information Center

    Hill, Clara E.; And Others

    1979-01-01

    Analyzes transcripts of films of Rogers, Perls, and Ellis counseling the same client, according to the Hill Counselor Verbal Response Category System. The system described verbal behaviors of the three counselors and detected behavioral differences reflective of their differing theoretical orientations. (Author)

  18. Writing (ONLINE) Space: Composing Webware in Perl.

    ERIC Educational Resources Information Center

    Hartley, Cecilia; Schendel, Ellen; Neal, Michael R.

    1999-01-01

    Points to scholarship that helped the authors think about the ideologies behind Writing Spaces, a Web-based site for computer-mediated communication that they constructed using Perl scripts. Argues that writing teachers can and should shape online spaces to facilitate their individual pedagogies rather than allowing commercial software to limit…

  19. PACCE: Perl Algorithm to Compute Continuum and Equivalent Widths

    NASA Astrophysics Data System (ADS)

    Riffel, Rogério; Borges Vale, Tibério

    2011-05-01

    PACCE (Perl Algorithm to Compute continuum and Equivalent Widths) computes continuum and equivalent widths. PACCE is able to determine mean continuum and continuum at line center values, which are helpful in stellar population studies, and is also able to compute the uncertainties in the equivalent widths using photon statistics.

  20. AlleleCoder: a PERL script for coding codominant polymorphism data for PCA analysis

    USDA-ARS?s Scientific Manuscript database

    A useful biological interpretation of diploid heterozygotes is in terms of the dose of the common allele (0, 1 or 2 copies). We have developed a PERL script that converts FASTA files into coded spreadsheets suitable for Principal Component Analysis (PCA). In combination with R and R Commander, two- ...
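
    The coding itself is simple enough to sketch from scratch (this is not the AlleleCoder interface, just the 0/1/2 dose idea it describes):

        use strict;
        use warnings;

        my @genotypes = qw(AA AT TT AT AA);    # one biallelic locus

        # Find the common allele by counting alleles across all genotypes.
        my %count;
        $count{$_}++ for map { split // } @genotypes;
        my ($common) = sort { $count{$b} <=> $count{$a} } keys %count;

        # Code each genotype as the dose of the common allele: 0, 1 or 2.
        my @dose = map { scalar grep { $_ eq $common } split // } @genotypes;
        print "@dose\n";    # 2 1 0 1 2 when 'A' is the common allele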

  1. A Tutorial in Creating Web-Enabled Databases with Inmagic DB/TextWorks through ODBC.

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2000-01-01

    Explains how to create Web-enabled databases. Highlights include Inmagic's DB/Text WebPublisher product called DB/TextWorks; ODBC (Open Database Connectivity) drivers; Perl programming language; HTML coding; Structured Query Language (SQL); Common Gateway Interface (CGI) programming; and examples of HTML pages and Perl scripts. (LRW)

  2. Good Moments in Gestalt Therapy: A Descriptive Analysis of Two Perls Sessions.

    ERIC Educational Resources Information Center

    Boulet, Donald; And Others

    1993-01-01

    Analyzed two Gestalt therapy sessions conducted by Fritz Perls using category system for identifying in-session client behaviors valued by Gestalt therapists. Four judges independently rated 210 client statements. Found common pattern of therapeutic movement: initial phase dominated by building block good moments and second phase characterized by…

  3. Strategies for single-point diamond machining a large format germanium blazed immersion grating

    NASA Astrophysics Data System (ADS)

    Montesanti, R. C.; Little, S. L.; Kuzmenko, P. J.; Bixler, J. V.; Jackson, J. L.; Lown, J. G.; Priest, R. E.; Yoxall, B. E.

    2016-07-01

    A large format germanium immersion grating was flycut with a single-point diamond tool on the Precision Engineering Research Lathe (PERL) at the Lawrence Livermore National Laboratory (LLNL) in November-December 2015. The grating, referred to as 002u, has an area of 59 mm x 67 mm (along-groove and cross-groove directions), a line pitch of 88 lines/mm, and a blaze angle of 32 degrees. Based on total groove length, the 002u grating is five times larger than the previous largest grating (ZnSe) cut on PERL, and forty-five times larger than the previous largest germanium grating cut on PERL. The key risks associated with cutting the 002u grating were tool wear and keeping the PERL machine running uninterrupted in a stable machining environment. This paper presents the strategies employed to mitigate these risks, introduces pre-machining of the as-etched grating substrate to produce a smooth, flat, damage-free surface into which the grooves are cut, and reports on trade-offs that drove decisions and on experimental results.

  4. Using Perls Staining to Trace the Iron Uptake Pathway in Leaves of a Prunus Rootstock Treated with Iron Foliar Fertilizers

    PubMed Central

    Rios, Juan J.; Carrasco-Gil, Sandra; Abadía, Anunciación; Abadía, Javier

    2016-01-01

    The aim of this study was to trace the Fe uptake pathway in leaves of Prunus rootstock (GF 677; Prunus dulcis × Prunus persica) plants treated with foliar Fe compounds using the Perls blue method, which detects labile Fe pools. Young expanded leaves of Fe-deficient plants grown in nutrient solution were treated with Fe-compounds using a brush. Iron compounds used were the ferrous salt FeSO4, the ferric salts Fe2(SO4)3 and FeCl3, and the chelate Fe(III)-EDTA, all of them at concentrations of 9 mM Fe. Leaf Fe concentration increases were measured at 30, 60, 90 min, and 24 h, and 70 μm-thick leaf transversal sections were obtained with a vibrating microtome and stained with Perls blue. In vitro results show that the Perls blue method is a good tool to trace the Fe uptake pathway in leaves when using Fe salts, but is not sensitive enough when using synthetic Fe(III)-chelates such as Fe(III)-EDTA and Fe(III)-IDHA. Foliar Fe fertilization increased leaf Fe concentrations with all Fe compounds used, with inorganic Fe salts causing larger leaf Fe concentration increases than Fe(III)-EDTA. Results show that Perls blue stain appeared within 30 min in the stomatal areas, indicating that Fe applied as inorganic salts was taken up rapidly via stomata. In the case of using FeSO4 a progression of the stain was seen with time toward vascular areas in the leaf blade and the central vein, whereas in the case of Fe(III) salts the stain mainly remained in the stomatal areas. Perls stain was never observed in the mesophyll areas, possibly due to the low concentration of labile Fe pools. PMID:27446123

  5. The Evaluation of Filmed Excerpts of Rogers, Perls, and Ellis by Beginning Counselor Trainees

    ERIC Educational Resources Information Center

    Kelly, F. Donald; Byrne, Thomas P.

    1977-01-01

    Students (N=29) viewed three stimulus films and rated therapeutic effectiveness of the therapists. Students were subsequently rank-ordered on the basis of skill development and assigned to one of three groups (high, middle, or low.) Results revealed an overall higher evaluation for Rogers as compared to either Perls or Ellis. (Author)

  6. Transition to a Unified System: Using Perl To Drive Library Databases and Enhance Web Site Functionality.

    ERIC Educational Resources Information Center

    Fagan, Judy Condit

    2001-01-01

    Discusses the need for libraries to routinely redesign their Web sites, and presents a case study that describes how a Perl-driven database at Southern Illinois University's library improved Web site organization and patron access, simplified revisions, and allowed staff unfamiliar with HTML to update content. (Contains 56 references.) (Author/LRW)

  7. Perl at the Joint Astronomy Centre

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie; Tilanus, Remo P. J.; Best, Casey; Prestage, Richard M.; Shimek, Pam; Glazebrook, Karl; Farrell, Tony J.

    Perl is used extensively at the JAC (UKIRT and JCMT), and because of the language's flexibility (enabling us to interface Perl to any library) we are finding that it is possible to write all of our utilities in it. This simplifies support and aids code reuse (via the module system and object-oriented interface) as well as shortening development time. Currently we have developed interfaces to messaging systems (ADAM and DRAMA), I/O libraries (NDF, GSD), astronomical libraries (SLALIB), and the Starlink noticeboard system (NBS). We have also developed tools to aid in data taking (the JCMT observation desk) and data processing (SURF and ORAC-DR). This paper briefly reviews the facilities available, with an emphasis on those which might be of interest to other observatories.

  8. PeRL: a circum-Arctic Permafrost Region Pond and Lake database

    NASA Astrophysics Data System (ADS)

    Muster, Sina; Roth, Kurt; Langer, Moritz; Lange, Stephan; Cresto Aleina, Fabio; Bartsch, Annett; Morgenstern, Anne; Grosse, Guido; Jones, Benjamin; Sannel, A. Britta K.; Sjöberg, Ylva; Günther, Frank; Andresen, Christian; Veremeeva, Alexandra; Lindgren, Prajna R.; Bouchard, Frédéric; Lara, Mark J.; Fortier, Daniel; Charbonneau, Simon; Virtanen, Tarmo A.; Hugelius, Gustaf; Palmtag, Juri; Siewert, Matthias B.; Riley, William J.; Koven, Charles D.; Boike, Julia

    2017-06-01

    Ponds and lakes are abundant in Arctic permafrost lowlands. They play an important role in Arctic wetland ecosystems by regulating carbon, water, and energy fluxes and providing freshwater habitats. However, ponds, i.e., waterbodies with surface areas smaller than 1.0 × 10^4 m^2, have not been inventoried on global and regional scales. The Permafrost Region Pond and Lake (PeRL) database presents the results of a circum-Arctic effort to map ponds and lakes from modern (2002-2013) high-resolution aerial and satellite imagery with a resolution of 5 m or better. The database also includes historical imagery from 1948 to 1965 with a resolution of 6 m or better. PeRL includes 69 maps covering a wide range of environmental conditions from tundra to boreal regions and from continuous to discontinuous permafrost zones. Waterbody maps are linked to regional permafrost landscape maps which provide information on permafrost extent, ground ice volume, geology, and lithology. This paper describes waterbody classification and accuracy, and presents statistics of waterbody distribution for each site. Maps of permafrost landscapes in Alaska, Canada, and Russia are used to extrapolate waterbody statistics from the site level to regional landscape units. PeRL presents pond and lake estimates for a total area of 1.4 × 10^6 km^2 across the Arctic, about 17 % of the Arctic lowland (< 300 m a.s.l.) land surface area. PeRL waterbodies with sizes of 1.0 × 10^6 m^2 down to 1.0 × 10^2 m^2 contributed up to 21 % to the total water fraction. Waterbody density ranged from 1.0 × 10 to 9.4 × 10^1 km^-2. Ponds are the dominant waterbody type by number in all landscapes, representing 45-99 % of the total waterbody number. The implementation of PeRL size distributions in land surface models will greatly improve the investigation and projection of surface inundation and carbon fluxes in permafrost lowlands. Waterbody maps, study area boundaries, and maps of regional permafrost landscapes including detailed metadata are available at https://doi.pangaea.de/10.1594/PANGAEA.868349.

  9. PeRL: a circum-Arctic Permafrost Region Pond and Lake database

    DOE PAGES

    Muster, Sina; Roth, Kurt; Langer, Moritz; ...

    2017-06-06

    Ponds and lakes are abundant in Arctic permafrost lowlands. They play an important role in Arctic wetland ecosystems by regulating carbon, water, and energy fluxes and providing freshwater habitats. However, ponds, i.e., waterbodies with surface areas smaller than 1.0 × 10^4 m^2, have not been inventoried on global and regional scales. The Permafrost Region Pond and Lake (PeRL) database presents the results of a circum-Arctic effort to map ponds and lakes from modern (2002-2013) high-resolution aerial and satellite imagery with a resolution of 5 m or better. The database also includes historical imagery from 1948 to 1965 with a resolution of 6 m or better. PeRL includes 69 maps covering a wide range of environmental conditions from tundra to boreal regions and from continuous to discontinuous permafrost zones. Waterbody maps are linked to regional permafrost landscape maps which provide information on permafrost extent, ground ice volume, geology, and lithology. This paper describes waterbody classification and accuracy, and presents statistics of waterbody distribution for each site. Maps of permafrost landscapes in Alaska, Canada, and Russia are used to extrapolate waterbody statistics from the site level to regional landscape units. PeRL presents pond and lake estimates for a total area of 1.4 × 10^6 km^2 across the Arctic, about 17 % of the Arctic lowland (< 300 m a.s.l.) land surface area. PeRL waterbodies with sizes of 1.0 × 10^6 m^2 down to 1.0 × 10^2 m^2 contributed up to 21 % to the total water fraction. Waterbody density ranged from 1.0 × 10 to 9.4 × 10^1 km^-2. Ponds are the dominant waterbody type by number in all landscapes, representing 45-99 % of the total waterbody number. In conclusion, the implementation of PeRL size distributions in land surface models will greatly improve the investigation and projection of surface inundation and carbon fluxes in permafrost lowlands.

  10. PeRL: a circum-Arctic Permafrost Region Pond and Lake database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muster, Sina; Roth, Kurt; Langer, Moritz

    Ponds and lakes are abundant in Arctic permafrost lowlands. They play an important role in Arctic wetland ecosystems by regulating carbon, water, and energy fluxes and providing freshwater habitats. However, ponds, i.e., waterbodies with surface areas smaller than 1.0 × 10^4 m^2, have not been inventoried on global and regional scales. The Permafrost Region Pond and Lake (PeRL) database presents the results of a circum-Arctic effort to map ponds and lakes from modern (2002-2013) high-resolution aerial and satellite imagery with a resolution of 5 m or better. The database also includes historical imagery from 1948 to 1965 with a resolution of 6 m or better. PeRL includes 69 maps covering a wide range of environmental conditions from tundra to boreal regions and from continuous to discontinuous permafrost zones. Waterbody maps are linked to regional permafrost landscape maps which provide information on permafrost extent, ground ice volume, geology, and lithology. This paper describes waterbody classification and accuracy, and presents statistics of waterbody distribution for each site. Maps of permafrost landscapes in Alaska, Canada, and Russia are used to extrapolate waterbody statistics from the site level to regional landscape units. PeRL presents pond and lake estimates for a total area of 1.4 × 10^6 km^2 across the Arctic, about 17 % of the Arctic lowland (< 300 m a.s.l.) land surface area. PeRL waterbodies with sizes of 1.0 × 10^6 m^2 down to 1.0 × 10^2 m^2 contributed up to 21 % to the total water fraction. Waterbody density ranged from 1.0 × 10 to 9.4 × 10^1 km^-2. Ponds are the dominant waterbody type by number in all landscapes, representing 45-99 % of the total waterbody number. In conclusion, the implementation of PeRL size distributions in land surface models will greatly improve the investigation and projection of surface inundation and carbon fluxes in permafrost lowlands.

  11. FAST: FAST Analysis of Sequences Toolbox

    PubMed Central

    Lawrence, Travis J.; Kauffman, Kyle T.; Amrine, Katherine C. H.; Carper, Dana L.; Lee, Raymond S.; Becich, Peter J.; Canales, Claudia J.; Ardell, David H.

    2015-01-01

    FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought. PMID:26042145

  12. PERLE. Powerful energy recovery linac for experiments. Conceptual design report

    NASA Astrophysics Data System (ADS)

    Angal-Kalinin, D.; Arduini, G.; Auchmann, B.; Bernauer, J.; Bogacz, A.; Bordry, F.; Bousson, S.; Bracco, C.; Brüning, O.; Calaga, R.; Cassou, K.; Chetvertkova, V.; Cormier, E.; Daly, E.; Douglas, D.; Dupraz, K.; Goddard, B.; Henry, J.; Hutton, A.; Jensen, E.; Kaabi, W.; Klein, M.; Kostka, P.; Lasheras, N.; Levichev, E.; Marhauser, F.; Martens, A.; Milanese, A.; Militsyn, B.; Peinaud, Y.; Pellegrini, D.; Pietralla, N.; Pupkov, Y.; Rimmer, R.; Schirm, K.; Schulte, D.; Smith, S.; Stocchi, A.; Valloni, A.; Welsch, C.; Willering, G.; Wollmann, D.; Zimmermann, F.; Zomer, F.

    2018-06-01

    A conceptual design is presented of a novel energy-recovering linac (ERL) facility for the development and application of the energy recovery technique to linear electron accelerators in the multi-turn, large current and large energy regime. The main characteristics of the powerful energy recovery linac experiment facility (PERLE) are derived from the design of the Large Hadron electron Collider, an electron beam upgrade under study for the LHC, for which it would be the key demonstrator. PERLE is thus projected as a facility to investigate efficient, high current (HC) (>10 mA) ERL operation with three re-circulation passages through newly designed SCRF cavities, at 801.58 MHz frequency, and following deceleration over another three re-circulations. In its fully equipped configuration, PERLE provides an electron beam of approximately 1 GeV energy. A physics programme possibly associated with PERLE is sketched, consisting of high precision elastic electron–proton scattering experiments, as well as photo-nuclear reactions of unprecedented intensities with up to 30 MeV photon beam energy as may be obtained using Fabry–Perot cavities. The facility has further applications as a general technology test bed that can investigate and validate novel superconducting magnets (beam induced quench tests) and superconducting RF structures (structure tests with HC beams, beam loading and transients). Besides a chapter on operation aspects, the report contains detailed considerations on the choices for the SCRF structure, optics and lattice design, solutions for arc magnets, source and injector and on further essential components. A suitable configuration derived from the here presented design concept may next be moved forward to a technical design and possibly be built by an international collaboration which is being established.

  13. The Bioperl Toolkit: Perl Modules for the Life Sciences

    PubMed Central

    Stajich, Jason E.; Block, David; Boulez, Kris; Brenner, Steven E.; Chervitz, Stephen A.; Dagdigian, Chris; Fuellen, Georg; Gilbert, James G.R.; Korf, Ian; Lapp, Hilmar; Lehväslaiho, Heikki; Matsalla, Chad; Mungall, Chris J.; Osborne, Brian I.; Pocock, Matthew R.; Schattner, Peter; Senger, Martin; Stein, Lincoln D.; Stupka, Elia; Wilkinson, Mark D.; Birney, Ewan

    2002-01-01

    The Bioperl project is an international open-source collaboration of biologists, bioinformaticians, and computer scientists that has evolved over the past 7 yr into the most comprehensive library of Perl modules available for managing and manipulating life-science information. Bioperl provides an easy-to-use, stable, and consistent programming interface for bioinformatics application programmers. The Bioperl modules have been successfully and repeatedly used to reduce otherwise complex tasks to only a few lines of code. The Bioperl object model has been proven to be flexible enough to support enterprise-level applications such as EnsEMBL, while maintaining an easy learning curve for novice Perl programmers. Bioperl is capable of executing analyses and processing results from programs such as BLAST, ClustalW, or the EMBOSS suite. Interoperation with modules written in Python and Java is supported through the evolving BioCORBA bridge. Bioperl provides access to data stores such as GenBank and SwissProt via a flexible series of sequence input/output modules, and to the emerging common sequence data storage format of the Open Bioinformatics Database Access project. This study describes the overall architecture of the toolkit, the problem domains that it addresses, and gives specific examples of how the toolkit can be used to solve common life-sciences problems. We conclude with a discussion of how the open-source nature of the project has contributed to the development effort. [Supplemental material is available online at www.genome.org. Bioperl is available as open-source software free of charge and is licensed under the Perl Artistic License (http://www.perl.com/pub/a/language/misc/Artistic.html). It is available for download at http://www.bioperl.org. Support inquiries should be addressed to bioperl-l@bioperl.org.] PMID:12368254
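
    As a concrete instance of "a few lines of code", the fragment below reads a FASTA file with Bio::SeqIO, one of the core, documented BioPerl classes (the file name is illustrative):

        use strict;
        use warnings;
        use Bio::SeqIO;

        # Stream sequence records from a FASTA file.
        my $in = Bio::SeqIO->new( -file => 'seqs.fasta', -format => 'fasta' );

        while ( my $seq = $in->next_seq ) {
            printf "%s\t%d bp\n", $seq->display_id, $seq->length;
        }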

  14. Study of different thermal processes on boron-doped PERL cells

    NASA Astrophysics Data System (ADS)

    Li, Wenjia; Wang, Zhenjiao; Han, Peiyu; Lu, Hongyan; Yang, Jian; Guo, Ying; Shi, Zhengrong; Li, Guohua

    2014-08-01

    In this paper, three kinds of thermal processes for boron-doped PERL cells were investigated: forming gas annealing (FGA), rapid thermal processing (RTP), and low-temperature annealing. FGA was introduced after laser ablation and doping in order to increase minority carrier lifetime by hydrogenating the trapping centers. Subsequent evaluation revealed considerable enhancement of minority carrier lifetime (from 150 μs to 240 μs) and of the implied Voc (from 660 mV to 675 mV). After aluminum sputtering, RTP at three peak temperatures (370 °C, 600 °C, and 810 °C), carried out in the compressed-air environment used in our experiment, was employed to form a contact between the metal and the semiconductor. It is concluded that only low-temperature (below 600 °C) firing could create a boron back surface field and a high-quality rear reflector. Lastly, a method of improving the performance of finished PERL cells which did not experience high-temperature (over 800 °C) firing was investigated. Finished cells that underwent low-temperature annealing in an N2 atmosphere at 150 °C for 15 min showed a 0.44% absolute increase in efficiency. The enhancement from low-temperature annealing comes from the activation of passivated boron, which is deactivated during FGA.

  15. The Fundamental Issues Study within the British BMD Review

    DTIC Science & Technology

    1998-02-01

    also be considered. Nevertheless, how formidable a challenge is posed by the ascent release of submunitions is acknowledged by Richard Garwin on the...Arguably, the crunch came in February 1987 when Richard Perle, visiting London as US Assistant Secretary for Defense, extolled a strong SDI as the...resignation of Richard Perle. That year was also to see the departure from political office in the Pentagon of four other SDI stalwarts: Frank

  16. Rear surface effects in high efficiency silicon solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenham, S.R.; Robinson, S.J.; Dai, X.

    1994-12-31

    Rear surface effects in PERL solar cells can lead not only to degradation in the short-circuit current and open-circuit voltage, but also in the fill factor. Three mechanisms capable of changing the effective rear surface recombination velocity with injection level are identified: two associated with oxidized p-type surfaces, and the third with two-dimensional effects associated with a rear floating junction. Each of these will degrade the fill factor if the range of junction biases corresponding to the rear surface transition coincides with the maximum power point. Despite the identified non-idealities, PERL cells with rear floating junctions (PERF cells) have achieved record open-circuit voltages for silicon solar cells, while simultaneously achieving fill factor improvements relative to standard PERL solar cells. Without optimization, a record efficiency of 22% has been demonstrated for a cell with a rear floating junction. The results of both theoretical and experimental studies are provided.

  17. PeRL: A circum-Arctic Permafrost Region Pond and Lake database

    USGS Publications Warehouse

    Muster, Sina; Roth, Kurt; Langer, Moritz; Lange, Stephan; Cresto Aleina, Fabio; Bartsch, Annett; Morgenstern, Anne; Grosse, Guido; Jones, Benjamin M.; Sannel, A.B.K.; Sjoberg, Ylva; Gunther, Frank; Andresen, Christian; Veremeeva, Alexandra; Lindgren, Prajna R.; Bouchard, Frédéric; Lara, Mark J.; Fortier, Daniel; Charbonneau, Simon; Virtanen, Tarmo A.; Hugelius, Gustaf; Palmtag, J.; Siewert, Matthias B.; Riley, William J.; Koven, Charles; Boike, Julia

    2017-01-01

    Ponds and lakes are abundant in Arctic permafrost lowlands. They play an important role in Arctic wetland ecosystems by regulating carbon, water, and energy fluxes and providing freshwater habitats. However, ponds, i.e., waterbodies with surface areas smaller than 1.0 × 10^4 m^2, have not been inventoried on global and regional scales. The Permafrost Region Pond and Lake (PeRL) database presents the results of a circum-Arctic effort to map ponds and lakes from modern (2002–2013) high-resolution aerial and satellite imagery with a resolution of 5 m or better. The database also includes historical imagery from 1948 to 1965 with a resolution of 6 m or better. PeRL includes 69 maps covering a wide range of environmental conditions from tundra to boreal regions and from continuous to discontinuous permafrost zones. Waterbody maps are linked to regional permafrost landscape maps which provide information on permafrost extent, ground ice volume, geology, and lithology. This paper describes waterbody classification and accuracy, and presents statistics of waterbody distribution for each site. Maps of permafrost landscapes in Alaska, Canada, and Russia are used to extrapolate waterbody statistics from the site level to regional landscape units. PeRL presents pond and lake estimates for a total area of 1.4 × 10^6 km^2 across the Arctic, about 17% of the Arctic lowland (<300 m a.s.l.) land surface area. PeRL waterbodies with sizes from 1.0 × 10^6 m^2 down to 1.0 × 10^2 m^2 contributed up to 21% of the total water fraction. Waterbody density ranged from 1.0 × 10^0 to 9.4 × 10^1 km^-2. Ponds are the dominant waterbody type by number in all landscapes, representing 45–99% of the total waterbody number. The implementation of PeRL size distributions in land surface models will greatly improve the investigation and projection of surface inundation and carbon fluxes in permafrost lowlands. Waterbody maps, study area boundaries, and maps of regional permafrost landscapes including detailed metadata are available at https://doi.pangaea.de/10.1594/PANGAEA.868349.

  18. Mutations in the Circadian Gene period Alter Behavioral and Biochemical Responses to Ethanol in Drosophila

    PubMed Central

    Liao, Jennifer; Seggio, Joseph A.; Ahmad, S. Tariq

    2016-01-01

    Clock genes, such as period, which maintain an organism’s circadian rhythm, can have profound effects on metabolic activity, including ethanol metabolism. In turn, ethanol exposure has been shown in Drosophila and mammals to cause disruptions of the circadian rhythm. Previous studies from our labs have shown that larval ethanol exposure disrupted the free-running period and period expression of Drosophila. In addition, a recent study has shown that arrhythmic flies show no tolerance to ethanol exposure. As such, Drosophila period mutants, which have either a shorter than wild-type free-running period (perS) or a longer one (perL), may also exhibit altered responses to ethanol due to their intrinsic circadian differences. In this study, we tested the initial sensitivity and tolerance of ethanol exposure on Canton-S, perS, and perL, and then measured their Alcohol Dehydrogenase (ADH) and body ethanol levels. We showed that perL flies had a slower sedation rate, a longer recovery from ethanol sedation, and a higher tolerance for sedation upon repeated ethanol exposure compared with Canton-S wild-type flies. Furthermore, perL flies had lower ADH activity and slower ethanol clearance than wild-type flies. The findings of this study suggest that period mutations influence ethanol-induced behavior and ethanol metabolism in Drosophila and that flies with longer circadian periods are more sensitive to ethanol exposure. PMID:26802726

  19. A resource for benchmarking the usefulness of protein structure models.

    PubMed

    Carbajo, Daniel; Tramontano, Anna

    2012-08-02

    Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  20. Learning SAS’s Perl Regular Expression Matching the Easy Way: By Doing

    DTIC Science & Technology

    2015-01-12

    Author: Paul Genovesi. The regex_learning_tool allows both beginner and expert to efficiently practice PRX matching by selecting and processing only the match records that the user is interested in, using a perl regular expression and/or source string.
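
    SAS's PRX functions compile and apply Perl-style patterns; as a hedged sketch of the kind of match such a tool exercises, here is the same idea in native Perl (the records and the pattern are made up, not taken from the report).

        use strict;
        use warnings;

        # The same pattern a SAS user would compile with PRXPARSE and test
        # with PRXMATCH, exercised here in native Perl; records are made up.
        my $re = qr/^(\w+)\s+(\d{4}-\d{2}-\d{2})$/;    # name, ISO date
        for my $record ('genovesi 2015-01-12', 'bad record') {
            if ($record =~ $re) {
                print "match: name=$1 date=$2\n";
            }
            else {
                print "no match: $record\n";
            }
        }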

  1. Solar-energy-system-performance evaluation: Perl-Mack Enterprises, Inc., single-family residence, Denver, Colorado, April 1978-March 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, R.V.

    1979-01-01

    The Perl-Mack Enterprises, Inc. site is a single-family dwelling whose solar heating system is designed to provide approximately 68% of the annual space heating and hot water requirements. The system consists of an array of flat plate collectors using a water-propylene glycol solution, a concrete water storage tank, and an auxiliary gas burner. The system is described, and its performance is analyzed using a system energy balance technique. (LEW)

  2. Initiated Protocol Telephony Feasibility for the US Navy, Embedded Proof-of-Concept

    DTIC Science & Technology

    2011-03-01

    2.1 Generating Certificates Using Open SSL 1. OpenSSL can be used to generate certificates. There are a number of helper scripts written in Perl that...help with the creation and maintenance of the certificate and keys. OpenSSL is available from a number of sites, i.e., slproweb.com. The default...installation is adequate although it may be useful to add the OpenSSL \\bin directory to the system environment variable PATH. Perl is also available
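
    As a hedged sketch of how a small Perl helper can drive OpenSSL in the way the report describes (this is not one of the scripts shipped with OpenSSL; file names and the certificate subject are hypothetical):

        use strict;
        use warnings;

        # Generate a self-signed test certificate by shelling out to openssl,
        # in the spirit of the helper scripts the report mentions. File names
        # and the subject are hypothetical.
        my @cmd = (qw(openssl req -new -x509 -newkey rsa:2048 -nodes -days 365),
                   '-keyout', 'key.pem', '-out', 'cert.pem',
                   '-subj',   '/CN=localhost');
        system(@cmd) == 0 or die "openssl failed: $?";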

  3. An interactive HTML ocean nowcast GUI based on Perl and JavaScript

    NASA Astrophysics Data System (ADS)

    Sakalaukus, Peter J.; Fox, Daniel N.; Louise Perkins, A.; Smedstad, Lucy F.

    1999-02-01

    We describe the use of Hyper Text Markup Language (HTML), JavaScript code, and Perl I/O to create and validate forms in an Internet-based graphical user interface (GUI) for the Naval Research Laboratory (NRL) Ocean models and Assimilation Demonstration System (NOMADS). The resulting nowcast system can be operated from any compatible browser across the Internet, for although the GUI was prepared in a Netscape browser, it used no Netscape extensions. Code available at: http://www.iamg.org/CGEditor/index.htm
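
    A minimal sketch of the server-side form handling such a Perl-backed GUI performs, using the standard CGI module; the field name and date format are hypothetical, not taken from NOMADS.

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use CGI;

        # Server-side validation of one form field, in the style the GUI
        # describes; the field name and date format are hypothetical.
        my $q    = CGI->new;
        my $date = $q->param('forecast_date') // '';
        print $q->header('text/html');
        if ($date =~ m{^\d{4}/\d{2}/\d{2}$}) {
            print "<p>Running nowcast for $date</p>\n";
        }
        else {
            print "<p>Error: date must be YYYY/MM/DD</p>\n";
        }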

  4. BioC implementations in Go, Perl, Python and Ruby

    PubMed Central

    Liu, Wanli; Islamaj Doğan, Rezarta; Kwon, Dongseop; Marques, Hernani; Rinaldi, Fabio; Wilbur, W. John; Comeau, Donald C.

    2014-01-01

    As part of a communitywide effort for evaluating text mining and information extraction systems applied to the biomedical domain, BioC is focused on the goal of interoperability, currently a major barrier to wide-scale adoption of text mining tools. BioC is a simple XML format, specified by DTD, for exchanging data for biomedical natural language processing. With initial implementations in C++ and Java, BioC provides libraries of code for reading and writing BioC text documents and annotations. We extend BioC to Perl, Python, Go and Ruby. We used SWIG to extend the C++ implementation for Perl and one Python implementation. A second Python implementation and the Ruby implementation use native data structures and libraries. BioC is also implemented in the Google language Go. BioC modules are functional in all of these languages, which can facilitate text mining tasks. BioC implementations are freely available through the BioC site: http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net/ PMID:24961236
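
    For orientation, BioC's collection/document/passage structure can be walked from Perl with a generic XML parser; the sketch below uses XML::LibXML and is NOT the official BioC Perl library, and the file name is hypothetical.

        use strict;
        use warnings;
        use XML::LibXML;

        # Walk the document/passage elements of a BioC XML file with the
        # generic XML::LibXML module. This is NOT the official BioC Perl
        # API; the file name is hypothetical.
        my $doc = XML::LibXML->load_xml(location => 'collection.xml');
        for my $d ($doc->findnodes('//document')) {
            my ($id) = $d->findnodes('./id');
            for my $p ($d->findnodes('./passage')) {
                my ($text) = $p->findnodes('./text');
                printf "%s: %s\n",
                    $id   ? $id->textContent : '?',
                    $text ? substr($text->textContent, 0, 60) : '';
            }
        }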

  5. Uric acid lowering to prevent kidney function loss in diabetes: the preventing early renal function loss (PERL) allopurinol study.

    PubMed

    Maahs, David M; Caramori, Luiza; Cherney, David Z I; Galecki, Andrzej T; Gao, Chuanyun; Jalal, Diana; Perkins, Bruce A; Pop-Busui, Rodica; Rossing, Peter; Mauer, Michael; Doria, Alessandro

    2013-08-01

    Diabetic kidney disease causes significant morbidity and mortality among people with type 1 diabetes (T1D). Intensive glucose and blood pressure control have thus far failed to adequately curb this problem and therefore a major need for novel treatment approaches exists. Multiple observations link serum uric acid levels to kidney disease development and progression in diabetes and strongly argue that uric acid lowering should be tested as one such novel intervention. A pilot of such a trial, using allopurinol, is currently being conducted by the Preventing Early Renal Function Loss (PERL) Consortium. Although the PERL trial targets T1D individuals at highest risk of kidney function decline, the use of allopurinol as a renoprotective agent may also be relevant to a larger segment of the population with diabetes. As allopurinol is inexpensive and safe, it could be cost-effective even for relatively low-risk patients, pending the completion of appropriate trials at earlier stages.

  6. 22.7% efficient PERL silicon solar cell module with a textured front surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, J.; Wang, A.; Campbell, P.

    1997-12-31

    This paper describes a solar cell module efficiency of 22.7% independently measured at Sandia National Laboratories. This is the highest ever confirmed efficiency for a photovoltaic module of this size achieved by cells made from any material. This 778-cm^2 module used 40 large-area double-layer antireflection-coated PERL (passivated emitter, rear locally-diffused) silicon cells with an average efficiency of 23.1%. A textured front module surface considerably improved the module efficiency. Also reported is an independently confirmed efficiency of 23.7% for a 21.6-cm^2 cell of the type used in the module. Using these PERL cells in the 1996 World Solar Challenge solar car race from Darwin to Adelaide across Australia, Honda's Dream and Aisin Seiki's Aisol III were placed first and third, respectively. Honda also set a new record by reaching Adelaide in four days with an average speed of 90 km/h over the 3010 km course.

  7. BioC implementations in Go, Perl, Python and Ruby.

    PubMed

    Liu, Wanli; Islamaj Doğan, Rezarta; Kwon, Dongseop; Marques, Hernani; Rinaldi, Fabio; Wilbur, W John; Comeau, Donald C

    2014-01-01

    As part of a communitywide effort for evaluating text mining and information extraction systems applied to the biomedical domain, BioC is focused on the goal of interoperability, currently a major barrier to wide-scale adoption of text mining tools. BioC is a simple XML format, specified by DTD, for exchanging data for biomedical natural language processing. With initial implementations in C++ and Java, BioC provides libraries of code for reading and writing BioC text documents and annotations. We extend BioC to Perl, Python, Go and Ruby. We used SWIG to extend the C++ implementation for Perl and one Python implementation. A second Python implementation and the Ruby implementation use native data structures and libraries. BioC is also implemented in the Google language Go. BioC modules are functional in all of these languages, which can facilitate text mining tasks. BioC implementations are freely available through the BioC site: http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net/

  8. Yaxx: Yet another X-ray extractor

    NASA Astrophysics Data System (ADS)

    Aldcroft, Tom

    2013-06-01

    Yaxx is a Perl script that facilitates batch data processing using Perl open source software and commonly available software such as CIAO/Sherpa, S-lang, SAS, and FTOOLS. For Chandra and XMM analysis it includes automated spectral extraction, fitting, and report generation. Yaxx can be run without climbing an extensive learning curve; even so, yaxx is highly configurable and can be customized to support complex analysis. yaxx uses template files and takes full advantage of the unique Sherpa / S-lang environment to make much of the processing user configurable. Although originally developed with an emphasis on X-ray data analysis, yaxx evolved to be a general-purpose pipeline scripting package.

  9. JEnsembl: a version-aware Java API to Ensembl data systems.

    PubMed

    Paterson, Trevor; Law, Andy

    2012-11-01

    The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing 'through time' comparative analyses to be performed. Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net).
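
    For comparison, access through the release-specific Ensembl Perl API that JEnsembl parallels typically looks like the following; the host and user are the public Ensembl defaults, the stable ID shown is human BRCA2, and the details should be treated as illustrative.

        use strict;
        use warnings;
        use Bio::EnsEMBL::Registry;

        # Fetch one human gene through the public Ensembl Perl API; the
        # stable ID shown is BRCA2, and connection details are the public
        # defaults.
        Bio::EnsEMBL::Registry->load_registry_from_db(
            -host => 'ensembldb.ensembl.org',
            -user => 'anonymous',
        );
        my $ga   = Bio::EnsEMBL::Registry->get_adaptor('Human', 'Core', 'Gene');
        my $gene = $ga->fetch_by_stable_id('ENSG00000139618');
        printf "%s %s:%d-%d\n", $gene->stable_id, $gene->seq_region_name,
            $gene->start, $gene->end;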

  10. Pastoral Group Counselling at a High Security Prison in Israel: Integrating Pierre Janet's Psychological Analysis with Fritz Perls' Gestalt Therapy.

    PubMed

    Brown, Paul; Brown, Marta

    2015-03-01

    This is a report of a short-term, pastoral counselling group conducted with Jewish internees in a high security prison in Israel. It was held as an adjunct to daily secular individual and group counselling and rehabilitation run by the Department of Social Work. Pastoral counselling employed spiritual and psychosocial methodologies to reduce anger, improve prisoner frustration tolerance, and develop a sense of self-efficacy and communal identity. It combined semi-didactic scriptural input with Pierre Janet's personality model, Fritz Perls' gestalt therapy, and analysis of the group process.

  11. A Ruby API to query the Ensembl database for genomic features.

    PubMed

    Strozzi, Francesco; Aerts, Jan

    2011-04-01

    The Ensembl database makes genomic features available via its Genome Browser. It is also possible to access the underlying data through a Perl API for advanced querying. We have developed a full-featured Ruby API to the Ensembl databases, providing the same functionality as the Perl interface with additional features. A single Ruby API is used to access different releases of the Ensembl databases and is also able to query multi-species databases. Most functionality of the API is provided using the ActiveRecord pattern. The library depends on introspection to make it release independent. The API is available through the Rubygem system and can be installed with the command gem install ruby-ensembl-api.

  12. JEnsembl: a version-aware Java API to Ensembl data systems

    PubMed Central

    Paterson, Trevor; Law, Andy

    2012-01-01

    Motivation: The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. Results: The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing ‘through time’ comparative analyses to be performed. Availability: Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net). Contact: jensembl-develop@lists.sf.net, andy.law@roslin.ed.ac.uk, trevor.paterson@roslin.ed.ac.uk PMID:22945789

  13. Limiting loss mechanisms in 23% efficient silicon solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aberle, A.G.; Altermatt, P.P.; Heiser, G.

    1995-04-01

    The "passivated emitter and rear locally diffused" (PERL) silicon solar cell structure presently demonstrates the highest terrestrial performance of any silicon-based solar cell. This paper presents a detailed investigation of the limiting loss mechanisms in PERL cells exhibiting independently confirmed 1-sun efficiencies of up to 23.0%. Optical, resistive, and recombinative losses are all analyzed under the full range of solar cell operating conditions with the aid of two-dimensional (2D) device simulations. The analysis is based on measurements of the reflectance, quantum efficiency, dark and illuminated current-voltage (I-V) characteristics, and properties of the Si-SiO2 interfaces employed on these cells for surface passivation. Through the use of the 2D simulations, particular attention has been paid to the magnitudes of the spatially resolved recombination losses in these cells. It is shown that approximately 50% of the recombination losses at the 1-sun maximum power point occur in the base of the cells, followed by recombination losses at the rear and front oxidized surfaces (25% and <25%, respectively). The relatively low fill factors of PERL cells are principally a result of resistive losses; however, the recombination behavior in the base and at the rear surface also contributes. This work predicts that the efficiency of 23% PERL cells could be increased by about 0.7% absolute if ohmic losses were eliminated, a further 1.1% absolute if there were no reflection losses at the nonmetallized front surface regions, about 2.0% by introducing ideal light trapping and eliminating shading losses due to the front metallization, and by about 3.7% absolute if the device had no defect-related recombination losses. New design rules for future efficiency improvements, evident from this analysis, are also presented. © 1995 American Institute of Physics.

  14. Speech-rhythm characteristics of client-centered, Gestalt, and rational-emotive therapy interviews.

    PubMed

    Chen, C L

    1981-07-01

    The aim of this study was to discover whether client-centered, Gestalt, and rational-emotive psychotherapy interviews could be described and differentiated on the basis of quantitative measurement of their speech rhythms. These measures were taken from the sound portion of a film showing interviews by Carl Rogers, Frederick Perls, and Albert Ellis. The variables used were total session and percentage of speaking times, speaking turns, vocalizations, interruptions, inside and switching pauses, and speaking rates. The three types of interview had very distinctive patterns of speech-rhythm variables. These patterns suggested that Rogers's Client-centered therapy interview was patient dominated, that Ellis's rational-emotive therapy interview was therapist dominated, and that Perls's Gestalt therapy interview was neither therapist nor patient dominated.

  15. Information Metacatalog for a Grid

    NASA Technical Reports Server (NTRS)

    Kolano, Paul

    2007-01-01

    SWIM is a Software Information Metacatalog that gathers detailed information about the software components and packages installed on a grid resource. Information is currently gathered for Executable and Linking Format (ELF) executables and shared libraries, Java classes, shell scripts, and Perl and Python modules. SWIM is built on top of the POUR framework, which is described in the preceding article. SWIM consists of a set of Perl modules for extracting software information from a system, an XML schema defining the format of data that can be added by users, and a POUR XML configuration file that describes how these elements are used to generate periodic, on-demand, and user-specified information. Periodic software information is derived mainly from the package managers used on each system. SWIM collects information from native package managers in FreeBSD, Solaris, and IRIX as well as the RPM, Perl, and Python package managers on multiple platforms. Because not all software is available, or installed in package form, SWIM also crawls the set of relevant paths from the Filesystem Hierarchy Standard that defines the standard file system structure used by all major UNIX distributions. Using these two techniques, the vast majority of software installed on a system can be located. SWIM computes the same information gathered by the periodic routines for specific files on specific hosts, and locates software on a system given only its name and type.
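
    The Perl-module side of such a scan can be approximated with the core ExtUtils::Installed module; a minimal sketch in that spirit (this is not SWIM's actual code):

        use strict;
        use warnings;
        use ExtUtils::Installed;

        # Enumerate installed Perl distributions and their versions, similar
        # in spirit to SWIM's Perl package scan (this is not SWIM's code).
        my $inst = ExtUtils::Installed->new;
        for my $module ($inst->modules) {
            my $version = $inst->version($module) // 'unknown';
            print "$module\t$version\n";
        }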

  16. COD::CIF::Parser: an error-correcting CIF parser for the Perl language.

    PubMed

    Merkys, Andrius; Vaitkus, Antanas; Butkus, Justas; Okulič-Kazarinas, Mykolas; Kairys, Visvaldas; Gražulis, Saulius

    2016-02-01

    A syntax-correcting CIF parser, COD::CIF::Parser, is presented that can parse CIF 1.1 files and accurately report the position and the nature of the discovered syntactic problems. In addition, the parser is able to automatically fix the most common and the most obvious syntactic deficiencies of the input files. Bindings for Perl, C and Python programming environments are available. Based on COD::CIF::Parser, the cod-tools package for manipulating the CIFs in the Crystallography Open Database (COD) has been developed. The cod-tools package has been successfully used for continuous updates of the data in the automated COD data deposition pipeline, and to check the validity of COD data against the IUCr data validation guidelines. The performance, capabilities and applications of different parsers are compared.

  17. Perl One-Liners: Bridging the Gap Between Large Data Sets and Analysis Tools.

    PubMed

    Hokamp, Karsten

    2015-01-01

    Computational analyses of biological data are becoming increasingly powerful, and researchers intending on carrying out their own analyses can often choose from a wide array of tools and resources. However, their application might be obstructed by the wide variety of different data formats that are in use, from standard, commonly used formats to output files from high-throughput analysis platforms. The latter are often too large to be opened, viewed, or edited by standard programs, potentially leading to a bottleneck in the analysis. Perl one-liners provide a simple solution to quickly reformat, filter, and merge data sets in preparation for downstream analyses. This chapter presents example code that can be easily adjusted to meet individual requirements. An online version is available at http://bioinf.gen.tcd.ie/pol.
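
    Typical one-liners of the kind the chapter covers are shown below as command lines; the file names and column positions are illustrative, not taken from the chapter.

        # Filter: keep tab-separated records whose third column exceeds 100.
        perl -F'\t' -lane 'print if $F[2] > 100' results.tsv

        # Reformat: swap the first two columns of a tab-separated file.
        perl -F'\t' -lane 'print join "\t", @F[1,0,2..$#F]' data.tsv

        # Merge-prep: count FASTA records in a sequence file.
        perl -ne '$n++ if /^>/; END { print "$n\n" }' seqs.fasta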

  18. MetaPlotR: a Perl/R pipeline for plotting metagenes of nucleotide modifications and other transcriptomic sites.

    PubMed

    Olarerin-George, Anthony O; Jaffrey, Samie R

    2017-05-15

    An increasing number of studies are mapping protein binding and nucleotide modification sites throughout the transcriptome. Often, these sites cluster in certain regions of the transcript, giving clues to their function. Hence, it is informative to summarize where in the transcript these sites occur. A metagene is a simple and effective tool for visualizing the distribution of sites along a simplified transcript model. In this work, we introduce MetaPlotR, a Perl/R pipeline for creating metagene plots. The code and associated tutorial are available at https://github.com/olarerin/metaPlotR. Contact: srj2003@med.cornell.edu

  19. Detecting SNPs and estimating allele frequencies in clonal bacterial populations by sequencing pooled DNA.

    PubMed

    Holt, Kathryn E; Teo, Yik Y; Li, Heng; Nair, Satheesh; Dougan, Gordon; Wain, John; Parkhill, Julian

    2009-08-15

    Here, we present a method for estimating the frequencies of SNP alleles present within pooled samples of DNA using high-throughput short-read sequencing. The method was tested on real data from six strains of the highly monomorphic pathogen Salmonella Paratyphi A, sequenced individually and in a pool. A variety of read mapping and quality-weighting procedures were tested to determine the optimal parameters, which afforded ≥80% sensitivity of SNP detection and strong correlation with true SNP frequency at a pool-wide read depth of 40×, declining only slightly at read depths of 20–40×. The method was implemented in Perl and relies on the open-source software Maq for read mapping and SNP calling. The Perl script is freely available from ftp://ftp.sanger.ac.uk/pub/pathogens/pools/.
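
    The frequency estimate itself reduces to simple arithmetic over quality-weighted read counts at each site; a hedged sketch follows, with made-up counts standing in for the Maq pileup parsing done by the published script.

        use strict;
        use warnings;

        # Per-site allele frequency as quality-weighted read counts divided
        # by the site total; the counts are hypothetical stand-ins for the
        # Maq pileup parsing done by the published script.
        my %weighted = (A => 2.5, C => 31.7, G => 0.0, T => 5.8);
        my $total = 0;
        $total += $_ for values %weighted;
        for my $base (sort { $weighted{$b} <=> $weighted{$a} } keys %weighted) {
            printf "%s\t%.3f\n", $base, $total ? $weighted{$base} / $total : 0;
        }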

  20. SCTE: An open-source Perl framework for testing equipment control and data acquisition

    NASA Astrophysics Data System (ADS)

    Mostaço-Guidolin, Luiz C.; Frigori, Rafael B.; Ruchko, Leonid; Galvão, Ricardo M. O.

    2012-07-01

    SCTE intends to provide a simple, yet powerful, framework for building data acquisition and equipment control systems for experimental Physics and correlated areas. Via its SCTE::Instrument module, RS-232, USB, and LAN buses are supported, and the intricacies of hardware communication are encapsulated underneath an object-oriented abstraction layer. Written in Perl, and using the SCPI protocol, enabled instruments can be easily programmed to perform a wide variety of tasks. While this work presents general aspects of the development of data acquisition systems using the SCTE framework, it is illustrated by particular applications designed for the calibration of several in-house developed devices for power measurement in the tokamak TCABR Alfvén Waves Excitement System. Catalogue identifier: AELZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License Version 3. No. of lines in distributed program, including test data, etc.: 13 811. No. of bytes in distributed program, including test data, etc.: 743 709. Distribution format: tar.gz. Programming language: Perl version 5.10.0 or higher. Computer: PC; SCPI-capable digital oscilloscope with RS-232, USB, or LAN communication ports; null modem, USB, or Ethernet cables. Operating system: GNU/Linux (2.6.28-11); should also work on any Unix-based operating system. Classification: 4.14. External routines: Perl modules Device::SerialPort, Term::ANSIColor, Math::GSL, Net::HTTP; Gnuplot 4.0 or higher. Nature of problem: Automation of experiments and data acquisition often requires expensive equipment and in-house development of software applications. Nowadays personal computers and test equipment come with fast and easy-to-use communication ports. Instrument vendors often supply application programs capable of controlling such devices, but these are very restricted in functionality; for instance, they cannot control more than one test instrument at a time or automate repetitive tasks. SCTE provides a way of using auxiliary equipment to automate experimental procedures at low cost using only a free, open-source operating system and libraries. Solution method: SCTE provides a Perl module that implements RS-232, USB, and LAN communication, allowing the use of SCPI-capable instruments [1] and thereby providing a straightforward way of creating automation and data acquisition applications using personal computers and test instruments [2]. References: [1] SCPI Consortium, Standard Commands for Programmable Instruments, 1999, http://www.scpiconsortium.org. [2] L.C.B. Mostaço-Guidolin, Determinação da configuração de ondas de Alfvén excitadas no tokamak TCABR, Master's thesis, Universidade de São Paulo (2007), http://www.teses.usp.br/teses/disponiveis/43/43134/tde-23042009-230419/.
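
    As a hedged sketch of the kind of SCPI exchange SCTE::Instrument encapsulates, the following uses Device::SerialPort directly; the port name and settings are hypothetical, and this is not the SCTE API itself.

        use strict;
        use warnings;
        use Device::SerialPort;

        # Ask an SCPI instrument for its identity over RS-232 using
        # Device::SerialPort directly; port name and settings are
        # hypothetical, and this is not the SCTE::Instrument API.
        my $port = Device::SerialPort->new('/dev/ttyS0')
            or die "cannot open serial port";
        $port->baudrate(9600);
        $port->databits(8);
        $port->parity('none');
        $port->stopbits(1);
        $port->write_settings;
        $port->write("*IDN?\n");
        sleep 1;                               # crude wait for the reply
        my ($count, $reply) = $port->read(255);
        print "instrument: $reply\n" if $count;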

  1. Three Psychotherapies Examined: Ellis, Rogers, Perls

    ERIC Educational Resources Information Center

    Stoten, J.; Goos, W.

    1974-01-01

    This study uses Bales' Interaction Process Analysis (I. P. A.) to identify significant process elements in counselling and psychotherapy. For this purpose, the film "Three Approaches to Psychotherapy" was analysed. (Editor)

  2. ATHENA, ARTEMIS, HEPHAESTUS: data analysis for X-ray absorption spectroscopy using IFEFFIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravel, B.; Newville, M.; UC)

    2010-07-20

    A software package for the analysis of X-ray absorption spectroscopy (XAS) data is presented. This package is based on the IFEFFIT library of numerical and XAS algorithms and is written in the Perl programming language using the Perl/Tk graphics toolkit. The programs described here are: (i) ATHENA, a program for XAS data processing, (ii) ARTEMIS, a program for EXAFS data analysis using theoretical standards from FEFF and (iii) HEPHAESTUS, a collection of beamline utilities based on tables of atomic absorption data. These programs enable high-quality data analysis that is accessible to novices while still powerful enough to meet the demands of an expert practitioner. The programs run on all major computer platforms and are freely available under the terms of a free software license.

  3. MOCASSIN-prot software

    USDA-ARS?s Scientific Manuscript database

    MOCASSIN-prot is a software tool, implemented in Perl and Matlab, for constructing protein similarity networks to classify proteins. Both domain composition and quantitative sequence similarity information are utilized in constructing the directed protein similarity networks. For each reference protein i...

  4. Personal Change and Intervention Style

    ERIC Educational Resources Information Center

    Andrews, John D. W.

    1977-01-01

    Presents a theory of personal change and analyzes growth-producing interventions using examples from the film, "Three Approaches to Psychotherapy". Compares the styles of Carl Rogers, Fritz Perls, and Albert Ellis to illustrate the theory. (Editor/RK)

  5. Person-Centered Gestalt Therapy: A Synthesis.

    ERIC Educational Resources Information Center

    Herlihy, Barbara

    1985-01-01

    Highlights the similarities between the person-centered approach to counseling of Carl Rogers and the Gestalt therapy of Fritz Perls. Discusses implementation of the two approaches and suggests they may be synthesized into a person-centered Gestalt therapy. (MCF)

  6. Gestalt Workshops: Suggested In-Service Training for Teachers.

    ERIC Educational Resources Information Center

    Fiordo, Richard

    1981-01-01

    Fritz Perls' Gestalt Workshops are explained and recommended for inservice training for teachers. Since Gestalt Workshops increase their participants' growth, awareness, and integration personally and environmentally, their benefit to classroom teachers would be direct and dramatic. (Author)

  7. i-ADHoRe 2.0: an improved tool to detect degenerated genomic homology using genomic profiles.

    PubMed

    Simillion, Cedric; Janssens, Koen; Sterck, Lieven; Van de Peer, Yves

    2008-01-01

    i-ADHoRe is a software tool that combines gene content and gene order information of homologous genomic segments into profiles to detect highly degenerated homology relations within and between genomes. The new version offers, besides a significant increase in performance, several optimizations to the algorithm, most importantly to the profile alignment routine. As a result, the annotations of multiple genomes, or parts thereof, can be fed simultaneously into the program, after which it will report all regions of homology, both within and between genomes. The i-ADHoRe 2.0 package contains the C++ source code for the main program as well as various Perl scripts and a fully documented Perl API to facilitate post-processing. The software runs on any Linux- or UNIX-based platform. The package is freely available for academic users and can be downloaded from http://bioinformatics.psb.ugent.be/

  8. Analyzing multiple data sets by interconnecting RSAT programs via SOAP Web services: an example with ChIP-chip data.

    PubMed

    Sand, Olivier; Thomas-Chollier, Morgane; Vervisch, Eric; van Helden, Jacques

    2008-01-01

    This protocol shows how to access the Regulatory Sequence Analysis Tools (RSAT) via a programmatic interface in order to automate the analysis of multiple data sets. We describe the steps for writing a Perl client that connects to the RSAT Web services and implements a workflow to discover putative cis-acting elements in promoters of gene clusters. In the presented example, we apply this workflow to lists of transcription factor target genes resulting from ChIP-chip experiments. For each factor, the protocol predicts the binding motifs by detecting significantly overrepresented hexanucleotides in the target promoters and generates a feature map that displays the positions of putative binding sites along the promoter sequences. This protocol is addressed to bioinformaticians and biologists with programming skills (notions of Perl). Running time is approximately 6 min on the example data set.
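
    The skeleton of such a client is short; the sketch below uses the CPAN SOAP::Lite module, but the WSDL location and the operation name are hypothetical stand-ins, not the actual RSAT interface.

        use strict;
        use warnings;
        use SOAP::Lite;

        # Skeleton of a SOAP client; the WSDL location and the operation
        # name are hypothetical stand-ins for the RSAT service described
        # in the protocol.
        my $service = SOAP::Lite->service('http://example.org/rsat/wsdl');
        my $result  = $service->retrieve_sequences(
            { organism => 'Escherichia_coli', feature => 'gene' }
        );
        print $result, "\n";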

  9. Adding EUNIS and VAULT rocket data to the VSO with Modern Perl frameworks

    NASA Astrophysics Data System (ADS)

    Mansky, Edmund

    2017-08-01

    A new Perl code is described that uses the modern object-oriented Moose framework to add EUNIS and VAULT rocket data to the Virtual Solar Observatory website. The code permits easy repair of FITS headers in cases where required FITS fields are missing from the original data files. The code makes novel use of the Moose method modifiers "before" and "after" to build in dependencies, so that database table creation occurs before the loading of data, and validation of the file-dependent tables occurs after the loading is completed. Also described is the computation and loading of the deferred FITS field CHECKSUM into the database following the loading and validation of the file-dependent tables. The loading of the EUNIS 2006 and 2007 flight data, and the VAULT 2.0 flight data, is described in detail as an illustrative example.
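
    The "before"/"after" modifiers highlighted here are standard Moose; a minimal sketch of the dependency pattern described follows, with hypothetical class and method names.

        package Loader;
        use Moose;

        sub create_tables { print "creating tables\n" }
        sub load_data     { print "loading data\n"    }
        sub validate      { print "validating\n"      }

        # Encode the dependencies the abstract describes: table creation
        # runs before data loading, validation runs after it completes.
        # Class and method names are hypothetical.
        before 'load_data' => sub { $_[0]->create_tables };
        after  'load_data' => sub { $_[0]->validate };

        package main;
        Loader->new->load_data;
        # prints: creating tables / loading data / validating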

  10. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales

    DOE PAGES

    Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander; ...

    2015-03-20

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. We tested the core; it is well-documented and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.
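
    The code-reference pattern attributed to HackaMol::X::Calculator can be sketched generically; the class below is a conceptual illustration only, not the actual HackaMol API.

        package TinyCalc;
        use Moose;

        # A calculator that delegates work to a stored code reference, the
        # generalization mechanism the abstract attributes to
        # HackaMol::X::Calculator. Conceptual sketch only; this is not the
        # actual HackaMol class.
        has 'map_fn' => (is => 'ro', isa => 'CodeRef', required => 1);

        sub calc { my ($self, @args) = @_; $self->map_fn->(@args) }

        package main;
        my $calc = TinyCalc->new(
            map_fn => sub { scalar @_ },   # stand-in for an external program
        );
        print $calc->calc(1.0, 2.0, 3.0), "\n";   # prints 3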

  11. Pressure Ratio to Thermal Environments

    NASA Technical Reports Server (NTRS)

    Lopez, Pedro; Wang, Winston

    2012-01-01

    A pressure ratio to thermal environments (PRatTlE.pl) program is a Perl code that estimates heating at requested body point locations by scaling the heating at a reference location by a pressure ratio factor. The pressure ratio factor is the ratio of the local pressure at the reference point and the requested point from CFD (computational fluid dynamics) solutions. This innovation provides pressure ratio-based thermal environments in an automated and traceable method. Previously, the pressure ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE is able to calculate heating environments for 150 body points in less than two minutes. PRatTlE is coded in the Perl programming language, is command-line driven, and has been successfully executed on both the HP and Linux platforms. It supports multiple concurrent runs. PRatTlE contains error trapping and input file format verification, which allows clear visibility into the input data structure and intermediate calculations.
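
    The scaling itself is a one-line ratio; a sketch with made-up numbers follows, where the orientation of the ratio (local over reference) is our reading of the abstract.

        use strict;
        use warnings;

        # Scale reference-point heating by the ratio of local pressures from
        # CFD solutions: q_bp = q_ref * (p_bp / p_ref). All numbers are made
        # up, and the ratio orientation is our reading of the abstract.
        my ($q_ref, $p_ref) = (12.5, 4.0);    # reference heating, pressure
        my %p_bp = (bp101 => 3.2, bp102 => 5.1);
        for my $bp (sort keys %p_bp) {
            printf "%s\t%.2f\n", $bp, $q_ref * $p_bp{$bp} / $p_ref;
        }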

  12. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales.

    PubMed

    Riccardi, Demian; Parks, Jerry M; Johs, Alexander; Smith, Jeremy C

    2015-04-27

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. The core is well-tested, well-documented, and easy to install across computational platforms. The goal of the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  13. Laser doping of boron-doped Si paste for high-efficiency silicon solar cells

    NASA Astrophysics Data System (ADS)

    Tomizawa, Yuka; Imamura, Tetsuya; Soeda, Masaya; Ikeda, Yoshinori; Shiro, Takashi

    2015-08-01

    Boron laser doping (LD) is a promising technology for high-efficiency solar cells such as p-type passivated locally diffused solar cells and n-type Si-wafer-based solar cells. We produced a printable phosphorus- or boron-doped Si paste (NanoGram® Si paste/ink) for use as a diffuser in the LD process. We used the boron LD process to fabricate high-efficiency passivated emitter and rear locally diffused (PERL) solar cells. PERL solar cells on Czochralski Si (Cz-Si) wafers yielded a maximum efficiency of 19.7%, whereas the efficiency of a reference cell was 18.5%. Fill factors above 79% and open circuit voltages above 655 mV were measured. We found that the boron-doped area effectively performs as a local boron back surface field (BSF). The characteristics of the solar cell formed using NanoGram® Si paste/ink were better than those of the reference cell.

  14. A RESTful Service Oriented Architecture for Science Data Processing

    NASA Astrophysics Data System (ADS)

    Duggan, B.; Tilmes, C.; Durbin, P.; Masuoka, E.

    2012-12-01

    The Atmospheric Composition Processing System is an implementation of a RESTful Service Oriented Architecture which handles incoming data from the Ozone Monitoring Instrument and the Ozone Mapping and Profiler Suite aboard the Aura and NPP spacecraft, respectively. The system has been built entirely from open source components, such as Postgres, Perl, and SQLite, and has leveraged the vast resources of the Comprehensive Perl Archive Network (CPAN). The modular design of the system also allows for many of the components to be easily released and integrated into the CPAN ecosystem and reused independently. At minimal expense, the CPAN infrastructure and community provide peer review, feedback and continuous testing in a wide variety of environments and architectures. A well-defined set of conventions also facilitates dependency management, packaging, and distribution of code. Test-driven development also provides a way to ensure stability despite a continuously changing base of dependencies.

  15. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. We tested the core; it is well-documented and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  16. Thermoreception and nociception of the skin: a classic paper of Bessou and Perl and analyses of thermal sensitivity during a student laboratory exercise.

    PubMed

    Kuhtz-Buschbeck, Johann P; Andresen, Wiebke; Göbel, Stephan; Gilster, René; Stick, Carsten

    2010-06-01

    About four decades ago, Perl and collaborators were the first ones who unambiguously identified specifically nociceptive neurons in the periphery. In their classic work, they recorded action potentials from single C-fibers of a cutaneous nerve in cats while applying carefully graded stimuli to the skin (Bessou P, Perl ER. Response of cutaneous sensory units with unmyelinated fibers to noxious stimuli. J Neurophysiol 32: 1025-1043, 1969). They discovered polymodal nociceptors, which responded to mechanical, thermal, and chemical stimuli in the noxious range, and differentiated them from low-threshold thermoreceptors. Their classic findings form the basis of the present method that undergraduate medical students experience during laboratory exercises of sensory physiology, namely, quantitative testing of the thermal detection and pain thresholds. This diagnostic method examines the function of thin afferent nerve fibers. We collected data from nearly 300 students that showed that 1) women are more sensitive to thermal detection and thermal pain at the thenar than men, 2) habituation shifts thermal pain thresholds during repetitive testing, 3) the cold pain threshold is rather variable and lower when tested after heat pain than in the reverse case (order effect), and 4) ratings of pain intensity on a visual analog scale are correlated with the threshold temperature for heat pain but not for cold pain. Median group results could be reproduced in a retest. Quantitative sensory testing of thermal thresholds is feasible and instructive in the setting of a laboratory exercise and is appreciated by the students as a relevant and interesting technique.

  17. Jamaica: a middle-aged program searches for new horizons.

    PubMed

    1984-01-01

    The advertising and marketing consultant for Jamaica's Commercial Distribution of Contraceptives (JCDC) program states that the program has reached a state of maturity that has resulted in some inertia. Although still the leader among contraceptive social marketing (CSM) programs in reaching the greatest percentage of its target market, product sales are no longer on an upswing, and retail outlets are not increasing in number. The project is hoping that the introduction of a new thin condom can help, but more than 1 new product may be needed to recapture momentum. The JCDC began in 1974 when Westinghouse Health Systems won a 3-year Agency for International Development (AID) award to create a Jamaican CSM program. Challenges facing the new social marketing project included: oral contraceptives (OCs) were sold only by prescription; most pharmacies were located in urban areas; many consumers associated condoms with prostitution and disease; and retailers were reluctant to carry contraceptives and ignorant of OC side effects. The 1st breakthrough came when Westinghouse obtained government permission to sell a project pill without prescription. After market research, project managers chose the name "Perle" for the JCDC's pill, manufactured in the US by Syntex as Noriday. "Panther" became the project's condom. Prices were set at 17 US cents for a Panther 3-pack and 34 cents for a Perle cycle. Advertising messages appeared on television, radio, bus shelters, cinema screens, billboards, and point-of-purchase displays. By the end of the 1st year's sales, a soft goods manufacturer had asked permission to produce Panther T-shirts and a Reggae composer had popularized songs about the product. Such promotional tactics boosted sales of all contraceptives on the island. About 690,000 Panther condoms and 450,000 other brands were sold in 1976; 195,000 Perle cycles were purchased compared with 135,000 cycles for all other brands combined. By 1977, Westinghouse was reducing advertising and concentrating on expanding retail sales outlets. Panther was being sold through 1108 outlets; Perle was distributed via 267 predominantly pharmacy outlets. In 1977 AID's contract with Westinghouse ended and the Jamaican National Family Planning Board took over the project management. With its subsidy markedly reduced, the JCDC soon was experiencing difficulty in Jamaica's troubled economy, as well as difficulty in expanding sales outlets. Despite the project's financial pinch, the JCDC has, with some success, used imaginative tactics like contests to spur sales.

  18. The Potential of CGI: Using Pre-Built CGI Scripts to Make Interactive Web Pages.

    ERIC Educational Resources Information Center

    Nackerud, Shane A.

    1998-01-01

    Describes CGI (Common Gateway Interface) scripts that are available on the Web and explains how librarians can use them to make Web pages more interactive. Topics include CGI security; Perl scripts; UNIX; and HTML. (LRW)
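
    A tiny drop-in script in the spirit of the pre-built CGI scripts the article surveys, written with the standard CGI module; the form field name is hypothetical.

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use CGI;

        # Echo a submitted comment back to the page, in the style of a
        # pre-built guestbook-type CGI script; the field name is
        # hypothetical.
        my $q       = CGI->new;
        my $comment = $q->param('comment') // '(no comment submitted)';
        $comment =~ s/[<>&"]//g;    # strip characters with HTML significance
        print $q->header('text/html'),
              "<html><body><p>You wrote: $comment</p></body></html>\n";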

  19. The Surrogate Self

    ERIC Educational Resources Information Center

    Gunnison, Hugh

    1976-01-01

    The use of the "surrogate self" in counseling is a simple Gestalt-like role-playing technique (Perls 1969) that can be especially effective when the client has begun to see the counselor as a trusted, caring, and understanding person. The role-playing is described. (Author/EJT)

  20. Climate Prediction Center - Reanalysis: Atmospheric Data

    Science.gov Websites

    Utilities for processing the reanalysis GRIB data files include wgrib and wgrib2 (with wgrib2mv and wgrib2ms for parallel processing with wgrib2) and the grb1to2.pl Perl script for converting GRIB-1 files to GRIB-2.

  1. A Model of RHIC Using the Unified Accelerator Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilat, F.; Tepikian, S.; Trahern, C. G.

    1998-01-01

    The Unified Accelerator Library (UAL) is an object-oriented and modular software environment for accelerator physics which comprises an accelerator object model for the description of the machine (SMF, for Standard Machine Format), a collection of Physics Libraries, and a Perl interface that provides a homogeneous shell for integrating and managing these components. Currently available physics libraries include TEAPOT++, a collection of C++ physics modules conceptually derived from TEAPOT, and DNZLIB, a differential algebra package for map generation. This software environment has been used to build a flat model of RHIC which retains the hierarchical lattice description while assigning specific characteristics to individual elements, such as measured field harmonics. A first application of the model and of the simulation capabilities of UAL has been the study of RHIC stability in the presence of Siberian snakes and spin rotators. The building blocks of RHIC snakes and rotators are helical dipoles, unconventional devices that cannot be modeled by traditional accelerator physics codes and have been implemented in UAL as Taylor maps. Section 2 describes the RHIC data stores, Section 3 the RHIC SMF format, and Section 4 the RHIC-specific Perl interface (RHIC Shell). Section 5 explains how the RHIC SMF and UAL have been used to study the RHIC dynamic behavior and presents detuning and dynamic aperture results. If the reader is not familiar with the motivation and characteristics of UAL, we include in the Appendix a useful overview paper. An example of a complete set of Perl scripts for RHIC simulation can also be found in the Appendix.

  2. GenomeVista

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poliakov, Alexander; Couronne, Olivier

    2002-11-04

    Aligning large vertebrate genomes that are structurally complex poses a variety of problems not encountered on smaller scales. Such genomes are rich in repetitive elements and contain multiple segmental duplications, which increases the difficulty of identifying true orthologous DNA segments in alignments. The sizes of the sequences make many alignment algorithms designed for comparing single proteins extremely inefficient when processing large genomic intervals. We integrated both local and global alignment tools and developed a suite of programs for automatically aligning large vertebrate genomes and identifying conserved non-coding regions in the alignments. Our method uses the BLAT local alignment program to find anchors on the base genome to identify regions of possible homology for a query sequence. These regions are postprocessed to find the best candidates, which are then globally aligned using the AVID global alignment program. In the last step conserved non-coding segments are identified using VISTA. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90% of known coding exons in the human genome. The GenomeVISTA software is a suite of Perl programs built on a MySQL database platform. The scheduler gets control data from the database, builds a queue of jobs, and dispatches them to a PC cluster for execution. The main program, running on each node of the cluster, processes individual sequences. A Perl library acts as an interface between the database and the above programs. The use of a separate library allows the programs to function independently of the database schema. The library also improves on the standard Perl MySQL database interface package by providing auto-reconnect functionality and improved error handling.

  3. Sharing programming resources between Bio* projects through remote procedure call and native call stack strategies.

    PubMed

    Prins, Pjotr; Goto, Naohisa; Yates, Andrew; Gautier, Laurent; Willis, Scooter; Fields, Christopher; Katayama, Toshiaki

    2012-01-01

    Open-source software (OSS) encourages computer programmers to reuse software components written by others. In evolutionary bioinformatics, OSS comes in a broad range of programming languages, including C/C++, Perl, Python, Ruby, Java, and R. To avoid writing the same functionality multiple times for different languages, it is possible to share components by bridging computer languages and Bio* projects, such as BioPerl, Biopython, BioRuby, BioJava, and R/Bioconductor. In this chapter, we compare the two principal approaches for sharing software between different programming languages: either by remote procedure call (RPC) or by sharing a local call stack. RPC provides a language-independent protocol over a network interface; examples are RSOAP and Rserve. The local call stack provides a between-language mapping not over the network interface, but directly in computer memory; examples are R bindings, RPy, and languages sharing the Java Virtual Machine stack. This functionality provides strategies for sharing of software between Bio* projects, which can be exploited more often. Here, we present cross-language examples for sequence translation, and measure throughput of the different options. We compare calling into R through native R, RSOAP, Rserve, and RPy interfaces, with the performance of native BioPerl, Biopython, BioJava, and BioRuby implementations, and with call stack bindings to BioJava and the European Molecular Biology Open Software Suite. In general, call stack approaches outperform native Bio* implementations and these, in turn, outperform RPC-based approaches. To test and compare strategies, we provide a downloadable BioNode image with all examples, tools, and libraries included. The BioNode image can be run on VirtualBox-supported operating systems, including Windows, OSX, and Linux.
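
    One concrete Perl-to-R bridge of the local (non-RPC) kind is the CPAN module Statistics::R, which drives an R session from Perl; the module choice here is ours for illustration, since the chapter benchmarks several different bridging mechanisms.

        use strict;
        use warnings;
        use Statistics::R;

        # Drive an R session from Perl with the CPAN module Statistics::R;
        # the module choice is ours for illustration, and the chapter
        # benchmarks other bridging mechanisms.
        my $R = Statistics::R->new;
        $R->set('x', [1, 2, 3, 4]);
        $R->run(q{y <- mean(x)});
        my $mean = $R->get('y');
        print "mean from R: $mean\n";
        $R->stop;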

  4. Neuro-Linguistic Programming: The New Eclectic Therapy.

    ERIC Educational Resources Information Center

    Betts, Nicoletta C.

    Richard Bandler and John Grinder developed neuro-linguistic programming (NLP) after observing "the magical skills of potent psychotherapists" Frederick Perls, Virginia Satir, and Milton Erickson. They compiled the most effective techniques for building rapport, gathering data, and influencing change in psychotherapy, offering them only as…

  5. Multidimensional Perception of Counselor Behavior

    ERIC Educational Resources Information Center

    Barak, Azy; LaCrosse, Michael B.

    1975-01-01

    Investigated Strong's prediction of the existence of three dimensions of perceived counselor behavior--expertness, attractiveness, and trustworthiness. Films of interviews given by Rogers, Ellis, and Perls were watched by 202 subjects, who rated each counselor on 36 bipolar scales. Results supported the existence of the hypothesized dimensions for…

  6. Dynamic Subcellular Localization of Iron during Embryo Development in Brassicaceae Seeds

    PubMed Central

    Ibeas, Miguel A.; Grant-Grant, Susana; Navarro, Nathalia; Perez, M. F.; Roschzttardtz, Hannetz

    2017-01-01

    Iron is an essential micronutrient for plants. Little is known about how iron is loaded into the embryo during seed development. In this article we used Perls/DAB staining in order to reveal iron localization at the cellular and subcellular levels in different Brassicaceae seed species. In dry seeds of Brassica napus, Nasturtium officinale, Lepidium sativum, Camelina sativa, and Brassica oleracea, iron localizes in vacuoles of cells surrounding the provasculature in cotyledons and hypocotyl. Using B. napus and N. officinale as model plants, we determined where iron localizes during seed development. Our results indicate that iron is not detectable by Perls/DAB staining in heart-stage embryo cells. Interestingly, at the torpedo development stage iron localizes in nuclei of different cell types, including the integument, free-cell endosperm, and almost all embryo cells. Later, iron is detected in cytoplasmic structures in different embryo cell types. Our results indicate that iron accumulates in nuclei at specific stages of embryo maturation before being localized in vacuoles of cells surrounding the provasculature in mature seeds. PMID:29312417

  7. The HEASARC Swift Gamma-Ray Burst Archive: The Pipeline and the Catalog

    NASA Technical Reports Server (NTRS)

    Donato, Davide; Angelini, Lorella; Padgett, C.A.; Reichard, T.; Gehrels, Neil; Marshall, Francis E.; Sakamoto, Takanori

    2012-01-01

    Since its launch in late 2004, the Swift satellite triggered or observed an average of one gamma-ray burst (GRB) every 3 days, for a total of 771 GRBs by 2012 January. Here, we report the development of a pipeline that semi-automatically performs the data-reduction and data-analysis processes for the three instruments on board Swift (BAT, XRT, UVOT). The pipeline is written in Perl, and it uses only HEAsoft tools and can be used to perform the analysis of a majority of the point-like objects (e.g., GRBs, active galactic nuclei, pulsars) observed by Swift. We run the pipeline on the GRBs, and we present a database containing the screened data, the output products, and the results of our ongoing analysis. Furthermore, we created a catalog summarizing some GRB information, collected either by running the pipeline or from the literature. The Perl script, the database, and the catalog are available for downloading and querying at the HEASARC Web site.

  8. pacce: Perl algorithm to compute continuum and equivalent widths

    NASA Astrophysics Data System (ADS)

    Riffel, Rogério; Borges Vale, Tibério

    2011-08-01

    We present Perl Algorithm to Compute continuum and Equivalent Widths (pacce). We describe the methods used in the computations and the requirements for its usage. We compare the measurements made with pacce and "manual" ones made using the IRAF splot task. These tests show that for synthetic simple stellar population (SSP) models the equivalent width strengths are very similar (differences ≲0.2 Å) between the two methods. In real stellar spectra, the correlation between the two sets of values is still very good, but with differences of up to 0.5 Å. pacce is also able to determine mean continuum and continuum at line center values, which are helpful in stellar population studies. In addition, it is also able to compute the uncertainties in the equivalent widths using photon statistics. The code is made available for the community through the web at http://www.if.ufrgs.br/~riffel/software.html.
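
    For reference, the quantity being computed is the equivalent width W = ∫(1 - Fλ/Fc) dλ, the width of a rectangle of continuum height whose area equals the flux removed by the line. The toy Perl script below (an independent sketch, not the pacce code) estimates W for a continuum-normalized profile by trapezoidal integration:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Trapezoidal estimate of W = integral(1 - F/Fc) dlambda for a toy
        # absorption line; pacce itself also fits the continuum first.
        my @lambda = (6550, 6555, 6560, 6565, 6570, 6575);  # Angstroms
        my @flux   = (1.00, 0.95, 0.60, 0.58, 0.92, 1.00);  # F/Fc

        my $ew = 0;
        for my $i (0 .. $#lambda - 1) {
            my $dl    = $lambda[$i + 1] - $lambda[$i];
            my $depth = (1 - $flux[$i]) + (1 - $flux[$i + 1]);
            $ew += 0.5 * $depth * $dl;   # trapezoid rule
        }
        printf "equivalent width: %.2f Angstroms\n", $ew;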

  9. The HEASARC Swift Gamma-Ray Burst Archive: The Pipeline and the Catalog

    NASA Astrophysics Data System (ADS)

    Donato, D.; Angelini, L.; Padgett, C. A.; Reichard, T.; Gehrels, N.; Marshall, F. E.; Sakamoto, T.

    2012-11-01

    Since its launch in late 2004, the Swift satellite triggered or observed an average of one gamma-ray burst (GRB) every 3 days, for a total of 771 GRBs by 2012 January. Here, we report the development of a pipeline that semi-automatically performs the data-reduction and data-analysis processes for the three instruments on board Swift (BAT, XRT, UVOT). The pipeline is written in Perl, and it uses only HEAsoft tools and can be used to perform the analysis of a majority of the point-like objects (e.g., GRBs, active galactic nuclei, pulsars) observed by Swift. We run the pipeline on the GRBs, and we present a database containing the screened data, the output products, and the results of our ongoing analysis. Furthermore, we created a catalog summarizing some GRB information, collected either by running the pipeline or from the literature. The Perl script, the database, and the catalog are available for downloading and querying at the HEASARC Web site.

  10. Open source clustering software.

    PubMed

    de Hoon, M J L; Imoto, S; Nolan, J; Miyano, S

    2004-06-12

    We have implemented k-means clustering, hierarchical clustering and self-organizing maps in a single multipurpose open-source library of C routines, callable from other C and C++ programs. Using this library, we have created an improved version of Michael Eisen's well-known Cluster program for Windows, Mac OS X and Linux/Unix. In addition, we generated a Python and a Perl interface to the C Clustering Library, thereby combining the flexibility of a scripting language with the speed of C. The C Clustering Library and the corresponding Python C extension module Pycluster were released under the Python License, while the Perl module Algorithm::Cluster was released under the Artistic License. The GUI code Cluster 3.0 for Windows, Macintosh and Linux/Unix, as well as the corresponding command-line program, were released under the same license as the original Cluster code. The complete source code is available at http://bonsai.ims.u-tokyo.ac.jp/mdehoon/software/cluster. Alternatively, Algorithm::Cluster can be downloaded from CPAN, while Pycluster is also available as part of the Biopython distribution.
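
    A minimal k-means run through the Perl interface looks roughly like the following sketch, based on the documented Algorithm::Cluster::kcluster interface; the toy data and parameter choices are illustrative only:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Algorithm::Cluster qw(kcluster);

        # Toy expression matrix: 4 genes x 2 conditions.
        my $data = [ [1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9] ];
        my $mask = [ map { [1, 1] } @$data ];   # 1 = value present

        my ($clusters, $error, $found) = kcluster(
            nclusters => 2,           # k
            data      => $data,
            mask      => $mask,
            weight    => [1.0, 1.0],  # equal weight per condition
            npass     => 100,         # random restarts; best pass kept
            method    => 'a',         # centroid = arithmetic mean
            dist      => 'e',         # Euclidean distance
            transpose => 0,           # cluster rows (genes), not columns
        );

        print "gene $_ -> cluster $clusters->[$_]\n" for 0 .. $#$data;
        printf "within-cluster error %.3f (best of %d passes)\n",
               $error, $found;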

  11. Gestalt Therapy: Student Perceptions of Fritz Perls in "Three Approaches to Psychotherapy"

    ERIC Educational Resources Information Center

    Reilly, Joe; Jacobus, Veronica

    2009-01-01

    The "Three Approaches to Psychotherapy" ("TAP") videotape series introduces students to three major schools of psychotherapy: client-centered therapy, Gestalt therapy, and rational-emotive therapy. A sample of undergraduate students viewed the "TAP" series. The students were surveyed about their observations of…

  12. Applied Computational Chemistry for the Blind and Visually Impaired

    ERIC Educational Resources Information Center

    Wedler, Henry B.; Cohen, Sarah R.; Davis, Rebecca L.; Harrison, Jason G.; Siebert, Matthew R.; Willenbring, Dan; Hamann, Christian S.; Shaw, Jared T.; Tantillo, Dean J.

    2012-01-01

    We describe accommodations that we have made to our applied computational-theoretical chemistry laboratory to provide access for blind and visually impaired students interested in independent investigation of structure-function relationships. Our approach utilizes tactile drawings, molecular model kits, existing software, Bash and Perl scripts…

  13. Higher prices in Jamaica.

    PubMed

    1982-03-01

    Price increases in the Jamaica CSM program went into effect on August 31, 1981. The program began in 1975. While the need for higher prices has been under discussion for the past 3 years, this is the 1st time the requisite approval from the Jamaica Price Commission has been obtained. The Jamaica National Family Planning Board (JNFPB) reports that the Panther 3-pack (condom) is up US$0.15 to US$0.30. Each Perle package (oral contraceptive) was increased by US$0.20. Single-cycle Perle now sells for US$0.50, and the 3-pack Perle sells for US$1.10. The 6-year price stagnation experienced by the CSM program resulted in a decreasing operational budget as program costs continued to rise. Marketing costs alone during this period escalated by 100-300%. For example, Panther pop-up display cartons cost the project US$0.16 each in 1975. By 1979 the same product cost US$0.49. Newspaper advertisements have increased from the 1975 cost of US$68.00 to nearly $200.00 per placement. The overall inflation rate in Jamaica during the last 5 years has averaged more than 20% annually. In the face of these rising costs, outlet expansion for Perle has been prevented, wholesaler margins have been unavailable, and new retailer training has been discontinued. It is projected that the new prices will result in increased annual revenues of US$80,000, which will be used to reinstate these essential marketing activities. The JNFPB is also planning to introduce a Panther 12-pack and Panther strips to the CSM product line. According to Marketing Manager Aston Evans, "We believe the public is now ready for this type of packaging," which is scheduled to be available soon. Panther is presently only available in a 3-pack, but annual sales have been steady. The new 12-pack will be stocked on supermarket shelves to provide higher product visibility and wider distribution. The selling price has been set at US$1.20 and is expected to yield a 25% increase in sales during the 1st year. A complete sales promotion and advertising campaign will accompany the 12-pack introduction. The marketing plan for Panther strips emphasizes placement in government and private sector offices and factories throughout the country. In the deep rural areas the strips will be available for sale in shops, bars, nightclubs, and other distribution points.

  14. ICCE/ICCAI 2000 Full & Short Papers (Intelligent Tutoring Systems).

    ERIC Educational Resources Information Center

    2000

    This document contains the full and short papers on intelligent tutoring systems (ITS) from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: a framework for Internet-based distributed learning; a fuzzy-based assessment for the Perl tutoring…

  15. Disabling Fictions: Institutionalized Delimitations of Revision.

    ERIC Educational Resources Information Center

    Carroll, Jeffrey

    1989-01-01

    Examines three contemporary taxonomies of revision as proposed by Wallace Hildick, Lester Faigley and Stephen Witte, and Sondra Perl. Uses literary and cultural theory to bridge the gap between these theories and students' revision practices. Argues that while revision may be prescriptive, it must also be subordinate to the writer's intentions and…

  16. Client Good Moments: An Intensive Analysis of a Single Session.

    ERIC Educational Resources Information Center

    Stalikas, Anastassios; Fitzpatrick, Marilyn

    1995-01-01

    An intensive analysis of a single counseling session conducted by Fritz Perls was carried out to examine relationships among client experiencing level, client strength of feeling, counselor interventions, and client good moments. The possibility that positive therapeutic outcome is related to the accretion of good moments is discussed. (JBJ)

  17. Leading System-Wide Improvement

    ERIC Educational Resources Information Center

    Harris, Alma

    2012-01-01

    Around the world there is a preoccupation with improving the performance of schools and school systems. Comparisons made between countries through PISA and PERLs have led to a preoccupation, and in some cases, an obsession, with securing a high position in the international league tables. The minds of policy-makers and politicians alike are…

  18. Isozyme variation in lychee (Litchi chinensis Sonn.)

    USDA-ARS?s Scientific Manuscript database

    A genetic diversity analysis involving 49 lychee (Litchi chinensis Sonn.) accessions using eight enzyme systems encoding 12 loci (Idh-1, Idh-2, Mdh-2, Per-1, Pgi-2, Pgm-1, Pgm-2, Skd, Tpi-1, Tpi-2, Ugpp-1, and Ugpp-2) revealed moderate to high levels of genetic variability. Cluster analysis of the iso...

  19. IT Resources - Betty Petersen Memorial Library

    Science.gov Websites

    Available NOAA-wide. Apress.com Free eBooks: free eBooks available through Apress publishers; topics include PHP, Perl, and programming VB.NET. Online Programming Books: free programming books. (These links lead to nonfederal websites.)

  20. Advantages of Application of Electronic Commerce in Procurement for the Armed Forces of Brazil and South Korea

    DTIC Science & Technology

    2001-12-01

    Northern Europe.”, Ecommerce Times, [http://www.ecommercetimes.com/perl/story/3546.html], June 2001. McGregor, Don, "Encryption", [http...sans.org/infosecFAQ/ecommerce/fraud.htm], September 2001. Schneider, Gary P. and James T. Perry, "Electronic Commerce", Course Technology, 2001

  1. Development and Evaluation of a Thai Learning System on the Web Using Natural Language Processing.

    ERIC Educational Resources Information Center

    Dansuwan, Suyada; Nishina, Kikuko; Akahori, Kanji; Shimizu, Yasutaka

    2001-01-01

    Describes the Thai Learning System, which is designed to help learners acquire the Thai word order system. The system facilitates the lessons on the Web using HyperText Markup Language and Perl programming, which interfaces with natural language processing by means of Prolog. (Author/VWL)

  2. An Investigation of How ESL Students Write.

    ERIC Educational Resources Information Center

    Raimes, Ann

    A study is described which investigated the differences between English-as-a-second-language (ESL) writers and native-English-speaking writers and examined closely a range of ESL writers and their composing processes. The procedures used were those used by Sondra Perl in a study of the composing processes of unskilled college writers (1979).…

  3. Relentless Verity: Education for Being-Becoming-Belonging.

    ERIC Educational Resources Information Center

    Kidd, James Robbins

    The dynamic relationship of the concepts of being, becoming, and belonging is and must be the heart and central goal of adult education. The concept can be understood most readily by examination of the writings of humanist psychologists such as Carl Rogers, Fritz Perls, Gordon Allport, and Abraham Maslow. Some characteristics or dimensions of an…

  4. Comparison of effects of glass fibre and glass powder on guinea-pig lungs

    PubMed Central

    Botham, Susan K.; Holt, P. F.

    1973-01-01

    Botham, Susan K., and Holt, P. F. (1973). British Journal of Industrial Medicine, 30, 232-236. Comparison of effects of glass fibre and glass powder on guinea-pig lungs. Following 24 hours' inhalation by guinea-pigs of powdered glass dust, the pulmonary effects over the succeeding month differed from those previously observed to follow inhalation of glass fibre in that (1) fewer erythrocytes escaped from the capillaries, (2) very few giant cells were produced, (3) erythrocytes and intracellular glass particles were cleared more readily because junctions between respiratory and terminal bronchioles were not blocked by giant cells, (4) intracellular granules containing Perls-positive material did not appreciably increase in number or intensity of staining during the month, and (5) particles were not coated with Perls-positive material during the time that pseudo-asbestos bodies would be formed from glass fibres. The difference between the effects of chemically similar glass powder and fibre during a month in a guinea-pig lung is considered to be due to the morphology of the inhaled particle. PMID:4124978

  5. TREE2FASTA: a flexible Perl script for batch extraction of FASTA sequences from exploratory phylogenetic trees.

    PubMed

    Sauvage, Thomas; Plouviez, Sophie; Schmidt, William E; Fredericq, Suzanne

    2018-03-05

    The body of DNA sequence data lacking taxonomically informative sequence headers is rapidly growing in user and public databases (e.g. sequences lacking identification and contaminants). In the context of systematics studies, sorting such sequence data for taxonomic curation and/or molecular diversity characterization (e.g. crypticism) often requires the building of exploratory phylogenetic trees with reference taxa. The subsequent step of segregating DNA sequences of interest based on observed topological relationships can represent a challenging task, especially for large datasets. We have written TREE2FASTA, a Perl script that enables and expedites the sorting of FASTA-formatted sequence data from exploratory phylogenetic trees. TREE2FASTA takes advantage of the interactive, rapid point-and-click color selection and/or annotations of tree leaves in the popular Java tree-viewer FigTree to segregate groups of FASTA sequences of interest to separate files. TREE2FASTA allows for both simple and nested segregation designs to facilitate the simultaneous preparation of multiple data sets that may overlap in sequence content.
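
    The downstream task, splitting a FASTA file by a per-sequence grouping such as the one TREE2FASTA derives from FigTree colour selections, can be sketched in a few lines of Perl. This is an illustration of the idea only, not the published script; the id-to-group map and input.fasta file are assumed:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Assumed map from sequence id to group, e.g. distilled from
        # FigTree colour tags; TREE2FASTA extracts this from the tree file.
        my %group = (seqA => 'red', seqB => 'red', seqC => 'blue');

        my (%fh_for, $current);
        open my $fasta, '<', 'input.fasta' or die "input.fasta: $!";
        while (my $line = <$fasta>) {
            if ($line =~ /^>(\S+)/) {
                my $g = $group{$1} // 'unsorted';
                open $fh_for{$g}, '>>', "$g.fasta" or die "$g.fasta: $!"
                    unless $fh_for{$g};
                $current = $fh_for{$g};
            }
            print {$current} $line if $current;
        }
        close $fasta;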

  6. BiDiBlast: comparative genomics pipeline for the PC.

    PubMed

    de Almeida, João M G C F

    2010-06-01

    Bi-directional BLAST is a simple approach to detect, annotate, and analyze candidate orthologous or paralogous sequences in a single go. This procedure is usually confined to the realm of customized Perl scripts, usually tuned for UNIX-like environments. Porting those scripts to other operating systems involves refactoring them, and also the installation of the Perl programming environment with the required libraries. To overcome these limitations, a data pipeline was implemented in Java. This application submits two batches of sequences to local versions of the NCBI BLAST tool, manages result lists, and refines both bi-directional and simple hits. GO Slim terms are attached to hits, several statistics are derived, and molecular evolution rates are estimated through PAML. The results are written to a set of delimited text tables intended for further analysis. The provided graphic user interface allows a friendly interaction with this application, which is documented and available to download at http://moodle.fct.unl.pt/course/view.php?id=2079 or https://sourceforge.net/projects/bidiblast/ under the GNU GPL license. Copyright 2010 Beijing Genomics Institute. Published by Elsevier Ltd. All rights reserved.
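
    BiDiBlast itself is a Java pipeline, but the reciprocal-best-hit idea at its core is easy to sketch in Perl for two tabular BLAST reports (assumed -outfmt 6, with each query's best hit listed first, as BLAST emits them):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Keep the top hit per query from a tabular BLAST file.
        sub best_hits {
            my ($file) = @_;
            my %best;
            open my $fh, '<', $file or die "$file: $!";
            while (<$fh>) {
                my ($query, $subject) = (split /\t/)[0, 1];
                $best{$query} //= $subject;   # first line = best score
            }
            close $fh;
            return \%best;
        }

        my $ab = best_hits('A_vs_B.tab');   # species A queried against B
        my $ba = best_hits('B_vs_A.tab');   # species B queried against A

        # A reciprocal best hit is a candidate ortholog pair.
        for my $a (sort keys %$ab) {
            my $b = $ab->{$a};
            print "$a\t$b\n" if defined $ba->{$b} and $ba->{$b} eq $a;
        }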

  7. New insights into Fe localization in plant tissues

    PubMed Central

    Roschzttardtz, Hannetz; Conéjéro, Geneviève; Divol, Fanchon; Alcon, Carine; Verdeil, Jean-Luc; Curie, Catherine; Mari, Stéphane

    2013-01-01

    Deciphering cellular iron (Fe) homeostasis requires having access to both quantitative and qualitative information on the subcellular pools of Fe in tissues and their dynamics within the cells. We have taken advantage of the Perls/DAB Fe staining procedure to perform a systematic analysis of Fe distribution in roots, leaves and reproductive organs of the model plant Arabidopsis thaliana, using wild-type and mutant genotypes affected in iron transport and storage. Roots of soil-grown plants accumulate iron in the apoplast of the central cylinder, a pattern that is strongly intensified when the citrate effluxer FRD3 is not functional, thus stressing the importance of citrate in the apoplastic movement of Fe. In leaves, Fe level is low and only detected in and around vascular tissues. In contrast, Fe staining in leaves of iron-treated plants extends in the surrounding mesophyll cells where Fe deposits, likely corresponding to Fe-ferritin complexes, accumulate in the chloroplasts. The loss of ferritins in the fer1,3,4 triple mutant provoked a massive accumulation of Fe in the apoplastic space, suggesting that in the absence of iron buffering in the chloroplast, cells activate iron efflux and/or repress iron influx to limit the amount of iron in the cell. In flowers, Perls/DAB staining has revealed a major sink for Fe in the anthers. In particular, developing pollen grains accumulate detectable amounts of Fe in small-size intracellular bodies that aggregate around the vegetative nucleus at the binuclear stage and that were identified as amyloplasts. In conclusion, using the Perls/DAB procedure combined to selected mutant genotypes, this study has established a reliable atlas of Fe distribution in the main Arabidopsis organs, proving and refining long-assumed intracellular locations and uncovering new ones. This “iron map” of Arabidopsis will serve as a basis for future studies of possible actors of iron movement in plant tissues and cell compartments. PMID:24046774

  8. Regional strategy tested in Caribbean.

    PubMed

    1984-01-01

    Barbados, St. Vincent, and St. Lucia have joined forces in the world's 1st regional Contraceptive Social Marketing (CSM) effort -- the Caribbean CSM. The Barbados Family Planning Association (BFPA) is overseeing the operation, which begins selling 2 contraceptive pills and a condom in early February. Costs and start-up times were shaved by adopting brand names and advertising materials from Jamaica's highly successful CSM project. Jamaica's popular "Panther" condom and "Perle" oral contraceptive (OC) are being used by the Caribbean CSM project. Perle's 9-year-old package has been redesigned, and the Caribbean CSM project also is selling a 2nd, low-dose version called "Perle-LD." The products are manufactured in the US by Syntex as Noriday and Norminest, respectively. But the regional approach's financial gains also had a debit side, most notably a tripling of bureaucratic procedures. Part of the project difficulties stem from differences among the 3 Caribbean countries. While sharing a common cultural heritage, St. Lucians speak a patois dialect in addition to the English prevalent on the other islands. The biggest hurdle was overcoming an economic disparity between Barbados and its less affluent neighbors, St. Vincent and St. Lucia. The CSM project decided to try a 2-tier product pricing strategy. In US currency, prices run $1.75 per cycle for both OCs on Barbados, but $1.26 on St. Vincent and St. Lucia. A Panther 3-pack costs 75 cents on Barbados and 42 cents on the other 2 islands. The project is being promoted with generic family planning media advertisements. The project also has held physician orientation seminars on each island. The pilot program will be accompanied by retailer training seminars. In addition the project may introduce a spermicidal foaming tablet, once the US Food and Drug Administration approves a new American-made product. The unique Caribbean CSM project may spread an idea as potent as the family planning message. Its success could transmit the regional concept worldwide, helping small nations to slow the rate of their population growth.

  9. Multiple Objective Evaluation and Choicemaking under Risk with partial Preference Information.

    DTIC Science & Technology

    1981-02-01

    Chelsea C. White, III; Andrew P. Sage. Vickson, R. G., "Theoretical Foundations of Stochastic Dominance," Chapter 2 in Whitmore, G. A., and Findlay, M. C. (eds), Stochastic Dominance: An

  10. What's new in the Atmospheric Model Evaluation Tool (AMET) version 1.3

    EPA Science Inventory

    A new version of the Atmospheric Model Evaluation Tool (AMET) has been released. The new version of AMET, version 1.3 (AMETv1.3), contains a number of updates and changes from the previous version of AMET (v1.2) released in 2012. First, the Perl scripts used in the previous ve...

  11. Research as a Recursive Process: Reconsidering "The Composing Processes of Unskilled College Writers" 35 Years Later

    ERIC Educational Resources Information Center

    Perl, Sondra

    2014-01-01

    This article describes Sondra Perl's retrospective review of the composing processes of unskilled college writers and whether her assumptions and values in the designing of research projects have changed over her long teaching career. She uses her college dissertation "Five Writers Writing" as the basis to reflect on the authors and…

  12. Web-Based Search and Plot System for Nuclear Reaction Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Otuka, N.; Nakagawa, T.; Fukahori, T.

    2005-05-24

    A web-based search and plot system for nuclear reaction data has been developed, covering experimental data in EXFOR format and evaluated data in ENDF format. The system is implemented for Linux OS, with Perl and MySQL used for CGI scripts and the database manager, respectively. Two prototypes for experimental and evaluated data are presented.
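
    The CGI-plus-MySQL combination mentioned above was the standard Perl idiom for such services; a minimal sketch of the pattern follows (the database, table, and column names are invented for illustration, not taken from the actual system):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use CGI;
        use DBI;

        # Hypothetical search endpoint: look up datasets for a target.
        my $q      = CGI->new;
        my $target = $q->param('target') // 'Fe-56';

        my $dbh = DBI->connect('DBI:mysql:database=exfor;host=localhost',
                               'user', 'password', { RaiseError => 1 });
        my $rows = $dbh->selectall_arrayref(
            'SELECT entry, reaction FROM datasets WHERE target = ?',
            undef, $target);

        print $q->header('text/plain');
        print join("\t", @$_), "\n" for @$rows;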

  13. Categories of Counselors Behavior as Defined from Cross-Validated Factoral Descriptions.

    ERIC Educational Resources Information Center

    Zimmer, Jules M.; And Others

    The intent of the study was to explore and categorize counselor responses. Three separate filmed presentations were shown. Participating with the same client were Albert Ellis, Frederick Perls, and Carl Rogers. At the beginning of each counselor statement, a number was inserted in sequence and remained on the videotape until completion of that…

  14. Foreign Language Analysis and Recognition (FLARe) Initial Progress

    DTIC Science & Technology

    2012-11-29

    University Language Modeling ToolKit; CoMMA: Count Mediated Morphological Analysis; CRUD: Create, Read, Update & Delete; CPAN: Comprehensive Perl Archive... Dates covered: 1 October 2010 – 30 September 2012. AFRL-RH-WP-TR-2012-0165, Foreign Language Analysis and Recognition (FLARe) Initial Progress, Brian M. Ore.

  15. Storytelling as Scholarship: A Writerly Approach to Research

    ERIC Educational Resources Information Center

    Perl, Sondra; Counihan, Beth; McCormack, Tim; Schnee, Emily

    2007-01-01

    What does it mean to take a writerly approach to research? Sondra Perl and her co-authors have pondered this question over the past five years as they have each worked with her to design and draft dissertations that combine their work as literacy researchers with their love of writing. Each of them has moved toward storytelling as a compelling and…

  16. VAC: Versatile Advection Code

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; Keppens, Rony

    2012-07-01

    The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.

  17. Looking for One's Shadow at Noon: Vol. II. Finding the Self in School and Community.

    ERIC Educational Resources Information Center

    Leue, Mary M.

    This volume contains a collection of essays, reflections, and other writings (many of which originally appeared in several journals) on the relations among self and school and community. The first selection is an obituary of Fritz Perls, a leader of Gestalt therapy. The second essay, "A Social and Political Reassessment of the Work of Wilhelm…

  18. A Comprehensive Reasoning Framework for Information Survivability (User Intent Encapsulation and Reasoning About Intrusion: Implementation and Performance Assessment)

    DTIC Science & Technology

    2006-08-01

    obvious and apparent attacks such as transferring the /etc/passwd file from one host to another, password-cracking by comparing the entries in the /etc/passwd file to entries in another file, using a dictionary file for the same, and exploiting the vulnerabilities such as rdist, perl 5.0.1, etc. The

  19. Data Reduction of Jittered Infrared Images Using the ORAC Pipeline

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm; Wright, Gillian; Bridger, Alan; Economou, Frossie

    We relate our experiences using the ORAC data reduction pipeline for jittered images of stars and galaxies. The reduction recipes currently combine applications from several Starlink packages with intelligent Perl recipes to cater to UKIRT data. We describe the recipes and some of the algorithms used, and compare the quality of the resultant mosaics and photometry with the existing facilities.

  20. 67 FR 24495 - United States v. Microsoft Corporation; Public Comments; Notice (MTC-00003461 - MTC-00007629)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2002-05-03

    ... (web-serving software), Linux, Perl, and those who are building a compatible & free version of MS`s..., Argument from Design Argument from Design-Web & Multimedia [email protected] http://www.ardes.com MTC-00003464... organization could be a good target for this effort. Their web address is http:// www.gnu.org/. This effort...

  1. A comparison of common programming languages used in bioinformatics.

    PubMed

    Fourment, Mathieu; Gillings, Michael R

    2008-02-05

    The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
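
    The flavor of such benchmarks is easy to reproduce in any of the tested languages; the Perl harness below times a small edit-distance dynamic program (in the spirit of the Sellers algorithm) with Time::HiRes. It is illustrative only, not the authors' benchmark code:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Time::HiRes qw(gettimeofday tv_interval);

        my $t0 = [gettimeofday];

        # Workload: Levenshtein-style dynamic program on two sequences.
        my ($s, $t) = ('ACGT' x 200, 'ACGA' x 200);
        my @prev = (0 .. length $t);
        for my $i (1 .. length $s) {
            my @cur = ($i);
            for my $j (1 .. length $t) {
                my $cost = substr($s, $i - 1, 1) eq substr($t, $j - 1, 1)
                         ? 0 : 1;
                my $min = $prev[$j - 1] + $cost;
                $min = $prev[$j] + 1    if $prev[$j] + 1    < $min;
                $min = $cur[$j - 1] + 1 if $cur[$j - 1] + 1 < $min;
                push @cur, $min;
            }
            @prev = @cur;
        }
        printf "distance %d computed in %.3f s\n",
               $prev[-1], tv_interval($t0);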

  2. A comparison of common programming languages used in bioinformatics

    PubMed Central

    Fourment, Mathieu; Gillings, Michael R

    2008-01-01

    Background: The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Results: Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. Conclusion: This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language. PMID:18251993

  3. Solar project description for Perl-Mack Enterprises' single family residences, Denver, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-08-21

    The Perl-Mack Enterprises Co. solar energy systems are installed in a total of 25 single-family dwellings located in Denver, Colorado. The 25 dwellings are of three different configurations. Two of the twenty-five dwellings have been fully instrumented for performance monitoring and evaluation since September 1977. All the solar systems are designed to provide approximately 69 percent of the space heating and energy requirements for each dwelling. Solar energy is collected by an array of flat plate collectors having a gross area of 470 square feet. A water-glycol mixture is used as the medium for delivering solar heat from the collectors to the storage tank. The storage tank has a total capacity of 945 gallons. A liquid-to-liquid heat exchanger, within the storage tank, transfers the stored heat from the transfer medium to the domestic hot water tank of the house. Space heating demands are met by circulating the heated water/glycol mixture from the storage tank through the heat exchanger coil installed downstream from the auxiliary furnace blower. The auxiliary gas-fired furnace is activated whenever the room thermostat demands heat.

  4. LavaNet—Neural network development environment in a general mine planning package

    NASA Astrophysics Data System (ADS)

    Kapageridis, Ioannis Konstantinou; Triantafyllou, A. G.

    2011-04-01

    LavaNet is a series of scripts written in Perl that gives access to a neural network simulation environment inside a general mine planning package. A well-known and very popular neural network development environment, the Stuttgart Neural Network Simulator, is used as the base for the development of neural networks. LavaNet runs inside VULCAN™—a complete mine planning package with advanced database, modelling and visualisation capabilities. LavaNet takes advantage of VULCAN's Perl-based scripting environment, Lava, to bring all the benefits of neural network development and application to geologists, mining engineers and other users of the specific mine planning package. LavaNet enables easy development of neural network training data sets using information from any of the data and model structures available, such as block models and drillhole databases. Neural networks can be trained inside VULCAN™ and the results used to generate new models that can be visualised in 3D. Direct comparison of developed neural network models with conventional and geostatistical techniques is now possible within the same mine planning software package. LavaNet supports Radial Basis Function networks, Multi-Layer Perceptrons and Self-Organised Maps.

  5. Tapir: A web interface for transit/eclipse observability

    NASA Astrophysics Data System (ADS)

    Jensen, Eric

    2013-06-01

    Tapir is a set of tools, written in Perl, that provides a web interface for showing the observability of periodic astronomical events, such as exoplanet transits or eclipsing binaries. The package provides tools for creating finding charts for each target and airmass plots for each event. The code can access target lists that are stored on-line in a Google spreadsheet or in a local text file.
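
    The airmass shown in such plots is, in the plane-parallel approximation, X ≈ sec(z) = 1/sin(altitude). A short Perl sketch (not Tapir's code; the site latitude, target declination, and hour angles are made-up values) shows the computation:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $pi = 4 * atan2(1, 1);
        my ($lat, $dec) = (39.0, 20.0);   # observer latitude, target dec (deg)

        for my $ha_hours (-4, -2, 0, 2, 4) {   # hour angle
            my ($phi, $d, $h) =
                map { $_ * $pi / 180 } $lat, $dec, $ha_hours * 15;
            # Standard alt-az relation: sin(alt) for an hour angle h.
            my $sin_alt = sin($phi) * sin($d) + cos($phi) * cos($d) * cos($h);
            if ($sin_alt > 0) {
                printf "HA %+d h: airmass %.2f\n", $ha_hours, 1 / $sin_alt;
            } else {
                printf "HA %+d h: target below horizon\n", $ha_hours;
            }
        }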

  6. An Ontology for Insider Threat Indicators Development and Applications

    DTIC Science & Technology

    2014-11-01

    An Ontology for Insider Threat Indicators Development and Applications Daniel L. Costa, Matthew L. Collins, Samuel J. Perl, Michael J. Albrethsen...services, commit fraud against an organization, steal intellectual property, or conduct national security espionage, sabotaging systems and data, as...engineering plans from the victim organization’s computer systems to his new employer. The insider accessed a web server with an administrator account

  7. DefEX: Hands-On Cyber Defense Exercise for Undergraduate Students

    DTIC Science & Technology

    2011-07-01

    Injection, and 4) File Upload. Next, the students patched the associated flawed Perl and PHP Hypertext Preprocessor (PHP) code. Finally, students...underlying script. The Zora XSS vulnerability existed in a PHP file that echoed unfiltered user input back to the screen. To eliminate the...vulnerability, students filtered the input using the PHP htmlentities function and retested the code. The htmlentities function translates certain ambiguous

  8. Study of Tools for Network Discovery and Network Mapping

    DTIC Science & Technology

    2003-11-01

    connected to the switch. iv. Accessibility of historical data and event data In general, network discovery tools keep a history of the collected...has the following software dependencies: - Java Virtual Machine - Perl modules - RRD Tool - Tomcat - PostgreSQL STRENGTHS AND...systems - provide a simple view of the current network status - generate alarms on status change - generate history of status change

  9. Nonhuman Primates are Protected from Smallpox Virus or Monkeypox Virus Challenges by the Antiviral Drug ST-246

    DTIC Science & Technology

    2009-06-01

    Drug ST-246 John Huggins, Arthur Goff, Lisa Hensley, Eric Mucker, Josh Shamblin, Carly Wlazlowski, Wendy Johnson, Jennifer Chapman, Tom Larsen...Hauer, M. Layton, J. McDade, M. T. Osterholm, T. O’Toole, G. Parker, T. Perl, P. K. Russell, K. Tonat, and the Working Group on Civilian Biodefense

  10. A Non-Invasive Technique which Demonstrates the Iron in the Buccal Mucosa of Sickle Cell Anaemia and Thalassaemia Patients who Undergo Repeated Blood Transfusions.

    PubMed

    Chittamsetty, Harika; Sekhar, M S Muni; Ahmed, Syed Afroz; Suri, Charu; Palla, Sridevi; Venkatesh, S Muni; Tanveer, Shahela

    2013-06-01

    Iron is vital for all living organisms. However, excess iron is hazardous because it promotes free radical formation. Therefore, iron absorption is carefully regulated to maintain an equilibrium between absorption and body loss of iron. Given the lack of specific excretory pathways for iron in humans, iron overload in the tissues is frequently encountered. It can be precipitated by a variety of conditions such as increased iron absorption, as is seen in haemochromatosis, or frequent parenteral iron administration, as is seen in thalassaemia and sickle cell anaemia patients (a transfusional overload). The aims were to demonstrate the iron overload at an early stage by oral exfoliative cytology in the oral mucosal cells of thalassaemia and sickle cell anaemia patients and to compare the presence of iron in the exfoliated oral epithelial cells with the serum ferritin levels in those patients. The present study comprised 40 β-thalassaemia major and 20 sickle cell anaemia patients who were undergoing repeated blood transfusions (a minimum of 15 or more), along with 60 clinically healthy individuals. Scrapings were obtained from the buccal mucosa and smeared onto glass slides. The slides were then stained with a Perls' Prussian blue staining kit and examined under a light microscope. 72.5% of the thalassaemia patients and 35% of the sickle cell anaemia patients revealed positivity for the Perls' Prussian blue reaction, and none of the controls showed this positivity. It was also observed that as the serum ferritin levels increased, the iron overload in the oral mucosal cells of the thalassaemia patients also increased, which was not statistically significant, whereas it was statistically significant in the case of the sickle cell anaemia patients. Since exfoliative cytology is a simple, painless, non-invasive and quick procedure to perform, further research should be carried out on the correlation of the Perls' Prussian blue reaction with the serum ferritin levels.

  11. Resources for comparing the speed and performance of medical autocoders.

    PubMed

    Berman, Jules J

    2004-06-15

    Concept indexing is a popular method for characterizing medical text, and is one of the most important early steps in many data mining efforts. Concept indexing differs from simple word or phrase indexing because concepts are typically represented by a nomenclature code that binds a medical concept to all equivalent representations. A concept search on the term renal cell carcinoma would be expected to find occurrences of hypernephroma and renal carcinoma (concept equivalents). The purpose of this study is to provide freely available resources to compare speed and performance among different autocoders. These tools consist of: 1) a public domain autocoder written in Perl (a free and open source programming language that installs on any operating system); 2) a nomenclature database derived from the unencumbered subset of the publicly available Unified Medical Language System; 3) a large corpus of autocoded output derived from a publicly available medical text. A simple lexical autocoder was written that parses plain text into a listing of all 1-, 2-, 3-, and 4-word strings contained in the text, assigning a nomenclature code for text strings that match terms in the nomenclature. The nomenclature used is the unencumbered subset of the 2003 Unified Medical Language System (UMLS). The unencumbered subset of UMLS was reduced to exclude homonymous one-word terms and proper names, resulting in a term/code data dictionary containing about a half million medical terms. The Online Mendelian Inheritance in Man (OMIM), a 92+ Megabyte publicly available medical opus, was used as sample medical text for the autocoder. The autocoding Perl script is remarkably short, consisting of just 38 command lines. The 92+ Megabyte OMIM file was completely autocoded in 869 seconds on a 2.4 GHz processor (less than 10 seconds per Megabyte of text). The autocoded output file (9,540,442 bytes) contains 367,963 coded terms from OMIM and is distributed with this manuscript. A public domain Perl script is provided that can parse through plain-text files of any length, matching concepts against an external nomenclature. The script and associated files can be used freely to compare the speed and performance of autocoding software.
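
    The n-gram matching strategy described, exhaustively testing 1- to 4-word strings against a nomenclature hash, is a natural fit for Perl. The toy below (an independent sketch with an invented three-entry mini-nomenclature, not the published 38-line script) shows the idea:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Invented mini-nomenclature: phrase -> concept code.
        my %code = (
            'renal cell carcinoma' => 'C0007134',
            'hypernephroma'        => 'C0007134',
            'renal carcinoma'      => 'C0007134',
        );

        my $text = lc 'Biopsy confirmed renal cell carcinoma of the kidney.';
        $text =~ s/[^a-z0-9 ]/ /g;           # crude tokenization
        my @w = split ' ', $text;

        for my $i (0 .. $#w) {
            for my $len (reverse 1 .. 4) {   # longest phrases first
                next if $i + $len - 1 > $#w;
                my $phrase = join ' ', @w[$i .. $i + $len - 1];
                print "$phrase -> $code{$phrase}\n" if exists $code{$phrase};
            }
        }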

  12. A Multi Agent System for Flow-Based Intrusion Detection

    DTIC Science & Technology

    2013-03-01

    Student t-test, as it is less likely to spuriously indicate significance because of the presence of outliers [128]. We use the MATLAB ranksum function [77...effectiveness of self-organization and “entangled hierarchies” for accomplishing scenario objectives. One of the interesting features of SOMAS is the ability...cross-validation and automatic model selection. It has interfaces for Java, Python, R, Splus, MATLAB, Perl, Ruby, and LabVIEW. Kernels: linear

  13. Image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsh, Amber; Harsch, Tim; Pitt, Julie

    2007-08-31

    The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks; scripts are available to insert, update and delete data from the underlying Oracle database, download data from NCBI's Genbank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface, and various tools available on the project web-site (image.llnl.gov) that provide a search interface to the database.

  14. A Disk-Based System for Producing and Distributing Science Products from MODIS

    NASA Technical Reports Server (NTRS)

    Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael

    2007-01-01

    Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.

  15. PAL: A Positional Astronomy Library

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Berry, D. S.

    2013-10-01

    PAL is a new positional astronomy library written in C that attempts to retain the SLALIB API but is distributed with an open source GPL license. The library depends on the IAU SOFA library wherever a SOFA routine exists and uses the most recent nutation and precession models. Currently about 100 of the 200 SLALIB routines are available. Interfaces are also available from Perl and Python. PAL is freely available via github.

  16. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

    Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing software code of a numerical simulator makes it difficult to continue to support the code itself. The problem of adequate documentation of astronomical software for adaptive optics simulators may complicate development, since the documentation must contain up-to-date schemes and mathematical descriptions implemented in the software code. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, they are often insufficient for the description of a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates the MATLAB comments in M-files into C-like comments, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for the framework deployment. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
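
    The comment-translation step can be approximated by a one-pass Perl filter that rewrites MATLAB '%' comments as C-style '//' comments; a hedged sketch (not the paper's actual script, which also handles formulas and markup) follows. In a Doxygen setup, such a filter would typically be wired in through the INPUT_FILTER configuration option.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Rewrite MATLAB/Octave comments so Doxygen's C parser accepts them.
        # Toy version: only handles '%' at line start or after whitespace,
        # and would mangle a literal '%' inside strings.
        while (my $line = <>) {
            $line =~ s{(^|\s)%}{$1//};
            print $line;
        }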

  17. Spiders and Camels and Sybase! Oh, My!

    NASA Astrophysics Data System (ADS)

    Barg, Irene; Ferro, Anthony J.; Stobie, Elizabeth

    The Hubble Space Telescope NICMOS Guaranteed Time Observers (GTOs) requested a means of sharing point spread function (PSF) observations. Because of the specifics of the instrument, these PSFs are very useful in the analysis of observations and can vary with the conditions on the telescope. The GTOs are geographically diverse, so a centralized processing solution would not work. The individual PSF observations were reduced by different people, at different institutions, using different reduction software. These varied observations had to be combined into a single database and linked to other information as well. The NICMOS software group at the University of Arizona developed a solution based on a World Wide Web (WWW) interface, using Perl/CGI forms to query the submitter about the PSF data to be entered. After some semi-automated sanity checks, using the FTOOLS package, the metadata are then entered into a Sybase relational database system. A user of the system can then query the database, again through a WWW interface, to locate and retrieve PSFs which may match their observations, as well as determine other information regarding the telescope conditions at the time of the observations (e.g., the breathing parameter). This presentation discusses some of the driving forces in the design, problems encountered, and the choices made. The tools used, including Sybase, Perl, FTOOLS, and WWW elements are also discussed.

  18. Quality assessment of protein model-structures using evolutionary conservation.

    PubMed

    Kalman, Matan; Ben-Tal, Nir

    2010-05-15

    Programs that evaluate the quality of a protein structural model are important both for validating the structure determination procedure and for guiding the model-building process. Such programs are based on properties of native structures that are generally not expected for faulty models. One such property, which is rarely used for automatic structure quality assessment, is the tendency for conserved residues to be located at the structural core and for variable residues to be located at the surface. We present ConQuass, a novel quality assessment program based on the consistency between the model structure and the protein's conservation pattern. We show that it can identify problematic structural models, and that the scores it assigns to the server models in CASP8 correlate with the similarity of the models to the native structure. We also show that when the conservation information is reliable, the method's performance is comparable and complementary to that of the other single-structure quality assessment methods that participated in CASP8 and that do not use additional structural information from homologs. A Perl implementation of the method, as well as the various Perl and R scripts used for the analysis, are available at http://bental.tau.ac.il/ConQuass/. Contact: nirb@tauex.tau.ac.il. Supplementary data are available at Bioinformatics online.
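
    The signal being exploited can be illustrated with a toy calculation (the intuition only, not the ConQuass scoring scheme): in a native-like model, per-residue conservation should correlate with burial, which a plain Pearson correlation captures:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Made-up per-residue scores: 1 = fully conserved / fully buried.
        my @cons   = (0.9, 0.8, 0.2, 0.1, 0.7, 0.3);
        my @burial = (0.8, 0.9, 0.1, 0.2, 0.6, 0.4);

        sub mean { my $s = 0; $s += $_ for @_; return $s / @_; }
        my ($mc, $mb) = (mean(@cons), mean(@burial));

        my ($num, $vc, $vb) = (0, 0, 0);
        for my $i (0 .. $#cons) {
            my ($dc, $db) = ($cons[$i] - $mc, $burial[$i] - $mb);
            $num += $dc * $db;
            $vc  += $dc ** 2;
            $vb  += $db ** 2;
        }
        printf "Pearson r = %.2f (higher suggests a more native-like model)\n",
               $num / sqrt($vc * $vb);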

  19. Some Effects of Compressibility on the Flow Through Fans and Turbines

    DTIC Science & Technology

    1945-08-01

    conditions, or the velocity diagram, for the cascade of airfoils representing a fan or a turbine-blade arrangement (fig. 1). The conservation laws...Compressibility on the Flow Through Fans and Turbines. AUTHOR(S): Perl, W.; Epstein, H. T. ORIGINATING AGENCY: Aircraft Engine Research Lab., Cleveland, O... turbine blading. It appears, however, that use of a suitable polytropic exponent n ≠ γ allows direct application in many cases.) Substitution of

  20. How Transformational Learning Promotes Caring, Consultation and Creativity, and Ultimately Contributes to Sustainable Development: Lessons from the Partnership for Education and Research about Responsible Living (PERL) Network

    ERIC Educational Resources Information Center

    Thoresen, Victoria Wyszynski

    2017-01-01

    Oases of learning which are transformative and lead to significant behavioural change can be found around the globe. Transformational learning has helped learners not only to understand what they have been taught but also to re-conceptualise and re-apply this understanding to their daily lives. Unfortunately, as many global reports indicate,…

  1. Simple Ontology Format (SOFT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokine, Alexandre

    2011-10-01

    Simple Ontology Format (SOFT) library and file format specification provides a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations with ontologies (adding, removing, or filtering of entities), and converting of ontologies into other formats. SOFT allows users to quickly create ontologies using only a basic text editor, verify them, and portray them in a graph layout system using customized styles.

  2. The American History of PTSD: Civil War -- Vietnam

    DTIC Science & Technology

    2011-03-21

    ABSTRACTION FROM, OR REPRODUCTION OF ALL OR ANY PART OF THIS DOCUMENT IS PERMITTED PROVIDED PROPER ACKNOWLEDGEMENT IS MADE. Acknowledgments I would...treatment procedure for "shell shock" involved administering a hypnotic therapy designed to restore the victim's memory through the trancelike repetition and...of the most effective methods of ensuring long-lasting recovery. However, the negative connotations associated with hypnotism ensured that only a

  3. Playing Detective: Reconstructing Software Architecture from Available Evidence

    DTIC Science & Technology

    1997-10-01

    information • PostgreSQL (based on POSTGRES [Stonebraker 90]) for model storage • IAPR [Kazman 96c], RMTool [Murphy 95], and Perl for analysis and...Stonebraker, M.; Rowe, L.; & Hirohama, M. "The Implementation of POSTGRES." IEEE Transactions on Knowledge and Data Engineering 2,1 (March

  4. A Public/Private Extension of Conway's Accessor Model

    NASA Technical Reports Server (NTRS)

    McCann, Karen M.; Yarrow, Maurice

    2000-01-01

    We present a new object-oriented model for a Perl package, based on Damian Conway's 'accessor' model. Our model includes both public and private data; it uses strategies to reduce a package namespace, but still maintains a robust and error-trapped approach. With this extended model we can make any package data or functions 'private', as well as 'public'. (Note: 'namespace' in this context means all the names, variables and subs, associated with a package.)

  5. Abstracts of the 15th Annual Meeting of the Israel Society for Neuroscience Eilat, Israel, December 3–5, 2006

    PubMed Central

    2007-01-01

    The Israel Society for Neuroscience (ISFN) was founded in 1993 by a group of leading Israeli scientists conducting research in the area of neurobiology. The primary goal of the society was to promote and disseminate the knowledge and understanding acquired by its members, and to strengthen interactions between them. Since then, the society holds its annual meeting every year in Eilat during the month of December. At these annual meetings the senior Israeli neurobiologists, their teams, and their graduate students, as well as foreign scientists and students, present their recent research findings in platform and poster presentations. The meeting also offers the opportunity for the researchers to exchange information with each other, often leading to the initiation of collaborative studies. Both the number of members of the society and of those participating in the annual meeting is constantly increasing, and it is anticipated that this year about 600 scientists will convene at the Princess Hotel in Eilat, Israel. Further information concerning the Israel Society for Neuroscience can be found at http://www.isfn.org.il. Committee: Zvi Wollberg (President), Tel Aviv University; Edi Barkai, University of Haifa; Etti Grauer, Israel Institute for Biological Research, Ness Ziona; Yoram Rami Grossman, Ben Gurion University of the Negev; Yoel Yaari, Hebrew University of Jerusalem; Gal Yadid, Bar-Ilan University; Shlomo Rotshenker (President Elect), Hebrew University of Jerusalem; Ettie Grauer (Treasurer), Israel Institute for Biological Research, Ness Ziona; Michal Gilady (Administrator), Rishon Le Zion.

  6. Reengineering Workflow for Curation of DICOM Datasets.

    PubMed

    Bennett, William; Smith, Kirk; Jarosz, Quasar; Nolan, Tracy; Bosch, Walter

    2018-06-15

    Reusable, publicly available data is a pillar of open science and rapid advancement of cancer imaging research. Sharing data from completed research studies not only saves research dollars required to collect data, but also helps ensure that studies are both replicable and reproducible. The Cancer Imaging Archive (TCIA) is a global shared repository for imaging data related to cancer. Ensuring the consistency, scientific utility, and anonymity of data stored in TCIA is of utmost importance. As the rate of submission to TCIA has been increasing, both in volume and complexity of DICOM objects stored, the process of curation of collections has become a bottleneck in acquisition of data. In order to increase the rate of curation of image sets, improve the quality of the curation, and better track the provenance of changes made to submitted DICOM image sets, a custom set of tools was developed, using novel methods for the analysis of DICOM data sets. These tools are written in the programming language Perl, use the open-source database PostgreSQL, make use of the Perl DICOM routines in the open-source package Posda, and incorporate DICOM diagnostic tools from other open-source packages, such as dicom3tools. These tools are referred to as the "Posda Tools." The Posda Tools are open source and available via git at https://github.com/UAMS-DBMI/PosdaTools . In this paper, we briefly describe the Posda Tools and discuss the novel methods they employ to facilitate rapid analysis of DICOM data, including the following: (1) use a database schema that is more permissive, and differently normalized, than traditional DICOM databases; (2) perform integrity checks automatically on a bulk basis; (3) apply revisions to DICOM datasets on a bulk basis, either through a web-based interface or via command-line executable Perl scripts; (4) track all such edits in a revision tracker so that they may be rolled back; (5) provide a UI to inspect the results of such edits, to verify that they are what was intended; (6) identify DICOM Studies, Series, and SOP instances using "nicknames" which are persistent and have well-defined scope, to make expression of reported DICOM errors easier to manage; and (7) rapidly identify potential duplicate DICOM datasets by pixel data; this can be used, e.g., to identify submission subjects which may relate to the same individual, without identifying the individual.
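
    Point 7 above, flagging potential duplicates by pixel data, can be sketched with a digest over candidate files (an illustration of the idea only, not the Posda implementation, which works at the level of DICOM elements):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Digest::MD5;

        # Group files by content digest; groups larger than one are
        # candidate duplicates. A real tool would hash only the PixelData
        # element so that differing headers cannot mask identical pixels.
        my %seen;
        for my $file (glob '*.dcm') {
            open my $fh, '<:raw', $file or die "$file: $!";
            my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
            close $fh;
            push @{ $seen{$digest} }, $file;
        }
        for my $d (sort keys %seen) {
            my @dups = @{ $seen{$d} };
            print "possible duplicates: @dups\n" if @dups > 1;
        }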

  7. Automated Alerting for Black Hole Routing

    DTIC Science & Technology

    2007-09-01

    Beginners or Learning Bro). Reply: Just the documentation that comes with it and is available from the wiki. Question 4: Is Bro compatible with...other scripts written in Python, Java, or Perl? Reply: It can call arbitrary programs but doesn’t link directly into other interpreters. Question 5...need to write that daemon? (C) What kind of scripts does Splunk support…and are Python and C part of them or not? 88 Reply: Puri, the answers to all

  8. Analysis of the Impact of Data Normalization on Cyber Event Correlation Query Performance

    DTIC Science & Technology

    2012-03-01

    2003). Organizations use it in planning, target marketing, decision-making, data analysis, and customer services (Shin, 2003). Organizations that...Following this IP address is a router message sequence number. This is a globally unique number for each router terminal and can range from...Appendix G, invokes the Perl parser for the log files from a particular USAF base, and invokes the CTL file that loads the resultant CSV file into the

  9. Software Assurance Measurement -- State of the Practice

    DTIC Science & Technology

    2013-11-01

    quality and productivity. 30+ languages: C/C++, Java, .NET, Oracle, PeopleSoft, SAP, Siebel, Spring, Struts, Hibernate, and all major databases. ChecKing... .NET, ActionScript, Ada, C/C++, Java, JavaScript, Objective-C, Opa, Packages, Perl, PHP, Python, Formal Methods...Suite: a tool for Ada, C, C++, C#, and Java code that comprises various analyses such as architecture checking, interface analyses, and clone detection

  10. Introduction to Python for CMF Authority Users

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritchett-Sheats, Lori A.

    This talk is a very broad overview of Python that highlights the key features of the language used in the Common Model Framework (CMF). I assume that the audience has some programming experience in a shell scripting language (C shell, Bash, Perl) or another high-level language (C/C++/Fortran). The talk covers Python data types, classes (objects), and basic programming constructs, and concludes with slides describing how I developed the basic classes for a TITANS homework assignment.

  11. The Cascading Impacts of Technology Selection: Incorporating Ruby on Rails into ECHO

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Cechini, M.

    2010-12-01

    NASA’s Earth Observing System (EOS) ClearingHOuse (ECHO) is a SOA-based Earth science data search and order system implemented in Java, with one significant exception: the web client used by 98% of our users is written in Perl. After many years of maintenance the Perl-based application had reached the end of its serviceable life, and ECHO was tasked with implementing a replacement. Despite a broad investment in Java, the ECHO team conducted a survey of modern development technologies including Flex, Python/Django, JSF2/Spring, and Ruby on Rails. The team ultimately chose Ruby on Rails (RoR) with Cucumber for testing, due to its perceived applicability to web application development and the corresponding gains in development efficiency. Both positive and negative impacts on the entire ECHO team, including our stakeholders, were immediate and sometimes subtle. The technology selection caused shifts in our architecture and design, development and deployment procedures, requirements definition approach, testing approach, and, somewhat surprisingly, our project team structure and software process. This presentation discusses our experiences (technical, process-related, and psychological) using RoR on a production system. During this session we will discuss: - Real impacts of introducing a dynamic language to a Java team - Real and perceived efficiency advantages - Impediments to adoption and effectiveness - Impacts of the transition from Test Driven Development to Behavior Driven Development - Leveraging Cucumber to provide fully executable requirements documents - Impacts on team structure and roles

  12. Martin L. Perl (1927-2014): A Biographical Memoir

    NASA Astrophysics Data System (ADS)

    Feldman, Gary; Jaros, John; Schindler, Rafe H.

    2017-10-01

    Particle physicist Martin Lewis Perl was recognized worldwide for his discovery of the τ (tau) lepton. For that achievement he received the 1982 Wolf Prize and shared the 1995 Nobel Prize in Physics. He was also a Fellow of the American Physical Society and a member of the National Academy of Sciences (elected 1981). Martin's distinctive approach to scientific investigation had its origins in his upbringing and in the influence of I. I. Rabi, his graduate advisor at Columbia University. After coming to Stanford University in 1963, Martin sought to understand why there should be two and only two families of leptons: the electron and its associated neutrino; and the muon and the muon neutrino. His discovery of the τ provided evidence for a third family of fundamental leptons. The bottom quark was discovered shortly afterward at the Fermi National Accelerator Laboratory, providing evidence for a third family of quarks. Direct evidence for the τ neutrino came later, thereby completing the third lepton generation, while the discovery of the top quark in 1995 completed the third generation of quarks. These achievements established leptons and quarks as fundamental constituents of matter and, along with the fundamental forces, provided the experimental basis of the "Standard Model," our picture of how all matter is made up and how its components interact. Why there are three and only three families of leptons and quarks remains an unsolved mystery to this day.

  13. Automated radiotherapy treatment plan integrity verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Deshan; Moore, Kevin L.

    2012-03-15

    Purpose: In our clinic, physicists spend from 15 to 60 min verifying the physical and dosimetric integrity of radiotherapy plans before presenting them to radiation oncology physicians for approval. The purpose of this study was to design and implement a framework to automate as many elements of this quality control (QC) step as possible. Methods: A comprehensive computer application was developed to carry out a majority of these verification tasks in the Philips PINNACLE treatment planning system (TPS). This QC tool functions based on both PINNACLE scripting elements and Perl subroutines. The core of this technique is the method of dynamic scripting, which involves a Perl programming module that is flexible and powerful for treatment plan data handling. Run-time plan data are collected, saved into temporary files, and analyzed against standard values and predefined logical rules. The results are summarized in a hypertext markup language (HTML) report that is displayed to the user. Results: This tool has been in clinical use for over a year. The occurrence frequency of technical problems, which would cause delays and suboptimal plans, has been reduced since clinical implementation. Conclusions: In addition to drastically reducing the set of human-driven logical comparisons, this QC tool also accomplished some tasks that are otherwise either quite laborious or impractical for humans to verify, e.g., identifying conflicts amongst IMRT optimization objectives.
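
    The rule-driven checking described above can be pictured with a short sketch. The plan fields, rule thresholds, and report layout below are hypothetical illustrations, not the clinical tool's actual logic or the PINNACLE data model.

      #!/usr/bin/perl
      # Minimal sketch of rule-based plan checking with an HTML summary.
      # Field names and thresholds are invented for illustration.
      use strict;
      use warnings;

      # Run-time plan data as it might be collected from the TPS.
      my %plan = (dose_per_fraction => 2.0, fractions => 35, couch_angle => 0);

      # Each rule pairs a description with a predicate on the plan data.
      my @rules = (
          ['Dose per fraction <= 3 Gy', sub { $_[0]->{dose_per_fraction} <= 3 }],
          ['1 <= fraction count <= 45', sub { my $n = $_[0]->{fractions}; $n >= 1 && $n <= 45 }],
          ['Couch angle is zero',       sub { $_[0]->{couch_angle} == 0 }],
      );

      print "<html><body><table border='1'>\n<tr><th>Check</th><th>Result</th></tr>\n";
      for my $rule (@rules) {
          my ($desc, $pred) = @$rule;
          my $ok = $pred->(\%plan) ? 'PASS' : 'FAIL';
          print "<tr><td>$desc</td><td>$ok</td></tr>\n";
      }
      print "</table></body></html>\n";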

  14. Longitudinal noninvasive magnetic resonance imaging of brain microhemorrhages in BACE inhibitor-treated APP transgenic mice.

    PubMed

    Beckmann, Nicolau; Doelemeyer, Arno; Zurbruegg, Stefan; Bigot, Karine; Theil, Diethilde; Frieauff, Wilfried; Kolly, Carine; Moulin, Pierre; Neddermann, Daniel; Kreutzer, Robert; Perrot, Ludovic; Brzak, Irena; Jacobson, Laura H; Staufenbiel, Matthias; Neumann, Ulf; Shimshek, Derya R

    2016-09-01

    Currently, several immunotherapies and BACE (Beta-Site APP Cleaving Enzyme) inhibitor approaches are being tested in the clinic for the treatment of Alzheimer's disease. A crucial mechanism-related safety concern is the exacerbation of microhemorrhages, which are already present in the majority of Alzheimer patients. To investigate potential safety liabilities of long-term BACE inhibitor therapy, we used aged amyloid precursor protein (APP) transgenic mice (APP23), which robustly develop cerebral amyloid angiopathy. T2*-weighted magnetic resonance imaging (MRI), a translational method applicable in preclinical and clinical studies, was used for the detection of microhemorrhages throughout the entire brain, with subsequent histological validation. Three-dimensional reconstruction based on in vivo MRI and serial Perls' stained sections demonstrated a one-to-one matching of the lesions, thus allowing for their histopathological characterization. MRI detected small Perls'-positive areas with high spatial resolution. Our data demonstrate that volumetric assessment by noninvasive MRI is well suited to monitoring cerebral microhemorrhages in vivo. Furthermore, 3 months of treatment of aged APP23 mice with the potent BACE inhibitor NB-360 did not exacerbate microhemorrhages, in contrast to the Aβ antibody β1. These results substantiate the safe use of BACE inhibitors with regard to microhemorrhages in long-term clinical studies for the treatment of Alzheimer's disease. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  15. CAPRRESI: Chimera Assembly by Plasmid Recovery and Restriction Enzyme Site Insertion.

    PubMed

    Santillán, Orlando; Ramírez-Romero, Miguel A; Dávila, Guillermo

    2017-06-25

    Here, we present chimera assembly by plasmid recovery and restriction enzyme site insertion (CAPRRESI). CAPRRESI benefits from many strengths of the original plasmid recovery method and introduces restriction enzyme digestion to ease DNA ligation reactions (required for chimera assembly). For this protocol, users clone wildtype genes into the same plasmid (pUC18 or pUC19). After the in silico selection of amino acid sequence regions where chimeras should be assembled, users obtain all the synonym DNA sequences that encode them. Ad hoc Perl scripts enable users to determine all synonym DNA sequences. After this step, another Perl script searches for restriction enzyme sites on all synonym DNA sequences. This in silico analysis is also performed using the ampicillin resistance gene (ampR) found on pUC18/19 plasmids. Users design oligonucleotides inside synonym regions to disrupt wildtype and ampR genes by PCR. After obtaining and purifying complementary DNA fragments, restriction enzyme digestion is accomplished. Chimera assembly is achieved by ligating appropriate complementary DNA fragments. pUC18/19 vectors are selected for CAPRRESI because they offer technical advantages, such as small size (2,686 base pairs), high copy number, advantageous sequencing reaction features, and commercial availability. The usage of restriction enzymes for chimera assembly eliminates the need for DNA polymerases yielding blunt-ended products. CAPRRESI is a fast and low-cost method for fusing protein-coding genes.
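
    The two in silico steps (enumerating synonymous sequences, then scanning them for restriction sites) can be sketched in a few lines of Perl. The truncated codon table, the four-residue example, and the EcoRI site are illustrative assumptions, not the published CAPRRESI scripts.

      #!/usr/bin/perl
      # Sketch: enumerate synonymous DNA sequences for a short amino acid
      # stretch and flag those containing a restriction site (EcoRI here).
      use strict;
      use warnings;

      my %codons = (               # truncated codon table for the example
          E => [qw(GAA GAG)],
          F => [qw(TTT TTC)],
          N => [qw(AAT AAC)],
          S => [qw(TCT TCC TCA TCG AGT AGC)],
      );
      my $site = 'GAATTC';         # EcoRI recognition sequence

      sub synonyms {               # all codon combinations for a peptide
          my @seqs = ('');
          for my $aa (@_) {
              @seqs = map { my $s = $_; map { $s . $_ } @{ $codons{$aa} } } @seqs;
          }
          return @seqs;
      }

      # Prints all 48 synonymous sequences for the peptide EFNS and marks
      # those that contain the EcoRI site.
      for my $seq (synonyms(qw(E F N S))) {
          printf "%s  %s\n", $seq, index($seq, $site) >= 0 ? 'has EcoRI site' : '-';
      }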

  16. Open Scenario Study, Phase II Report: Assessment and Development of Approaches for Satisfying Unclassified Scenario Needs

    DTIC Science & Technology

    2010-01-01

    interface, another providing the application logic (a program used to manipulate the data), and a server running Microsoft SQL Server or Oracle RDBMS... Oracle) • MySQL (Open Source) • Other What application server software will be needed? • Application Server • CGI PHP/Perl (Open Source...are used throughout DoD and serve a variety of functions. While DoD has a codified and institutionalized process for the development of a common set

  17. Querying and Computing with BioCyc Databases

    PubMed Central

    Krummenacker, Markus; Paley, Suzanne; Mueller, Lukas; Yan, Thomas; Karp, Peter D.

    2006-01-01

    Summary: We describe multiple methods for accessing and querying the complex and integrated cellular data in the BioCyc family of databases: access through multiple file formats, access through Application Program Interfaces (APIs) for LISP, Perl and Java, and SQL access through the BioWarehouse relational database. Availability: The Pathway Tools software and the 20 BioCyc DBs in Tiers 1 and 2 are freely available to academic users; fees apply to some types of commercial use. For download instructions see http://BioCyc.org/download.shtml PMID:15961440

  18. Wrapping up BLAST and other applications for use on Unix clusters.

    PubMed

    Hokamp, Karsten; Shields, Denis C; Wolfe, Kenneth H; Caffrey, Daniel R

    2003-02-12

    We have developed two programs that speed up common bioinformatics applications by spreading them across a UNIX cluster: (1) BLAST.pm, a new module for the 'MOLLUSC' package, and (2) WRAPID, a simple tool for parallelizing large numbers of small instances of programs such as BLAST, FASTA and CLUSTALW. The packages were developed in Perl on a 20-node Linux cluster and are provided together with a configuration script and documentation. They can be freely downloaded from http://wolfe.gen.tcd.ie/wrapper.
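
    A WRAPID-style wrapper can be sketched as follows: split the multi-FASTA query file into chunks, then run one BLAST process per chunk. The chunk count, the local fork-based dispatch (a cluster version would dispatch to remote nodes or a scheduler instead), and the legacy blastall command line are assumptions for illustration, not the published code.

      #!/usr/bin/perl
      # Sketch: split a FASTA query file and run BLAST on the chunks in parallel.
      use strict;
      use warnings;

      my $fasta  = shift @ARGV or die "usage: $0 <queries.fasta> [n_chunks]\n";
      my $chunks = shift @ARGV || 4;

      # Distribute FASTA records round-robin over the chunk files.
      open my $in, '<', $fasta or die "cannot read $fasta: $!";
      my @fh;
      open $fh[$_], '>', "chunk_$_.fasta" or die $! for 0 .. $chunks - 1;
      {
          local $/ = "\n>";                   # read one FASTA record at a time
          my $n = 0;
          while (my $rec = <$in>) {
              $rec =~ s/\n>$//;               # strip the record-separator artifact
              $rec =~ s/^>?/>/;               # restore the leading '>'
              print { $fh[$n++ % $chunks] } "$rec\n";
          }
      }
      close $_ for $in, @fh;

      # One BLAST job per chunk; waitpid collects them all.
      my @pids;
      for my $i (0 .. $chunks - 1) {
          my $pid = fork();
          die "fork failed: $!" unless defined $pid;
          if ($pid == 0) {
              exec "blastall -p blastp -d nr -i chunk_$i.fasta -o chunk_$i.out"
                  or die "exec failed: $!";
          }
          push @pids, $pid;
      }
      waitpid $_, 0 for @pids;
      print "all BLAST chunks finished\n";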

  19. SNS Proton Beam Window Disposal

    NASA Astrophysics Data System (ADS)

    Popova, Irina; Gallmeier, Franz X.; Trotter, Steven

    2017-09-01

    In order to support disposal of the proton beam window assembly from the Spallation Neutron Source beamline to the target station, waste classification analyses are performed. The window has a limited lifetime due to radiation-induced material damage. The analyses include calculation of the radionuclide inventory and shielding analyses for the transport package/container, to ensure that the container complies with transportation and waste management regulations. To automate this procedure and minimize manual work, a Perl script was written.

  20. Tools for automating spacecraft ground systems: The Intelligent Command and Control (ICC) approach

    NASA Technical Reports Server (NTRS)

    Stoffel, A. William; Mclean, David

    1996-01-01

    The practical application of scripting languages and World Wide Web tools to spacecraft ground system automation is reported. The mission activities and the automation tools used at the Goddard Space Flight Center (MD) are reviewed. The use of the Tool Command Language (TCL) and the Practical Extraction and Report Language (PERL) scripting tools for automating mission operations is discussed, together with the application of different tools for the Compton Gamma Ray Observatory ground system.

  1. An Airlift Hub-and-Spoke Location-Routing Model with Time Windows: Case Study of the CONUS-to-Korea Airlift Problem

    DTIC Science & Technology

    1998-03-01

    a point of embarkation to a point of debarkation. This study develops an alternative hub-and-spoke combined location-routing integer linear...programming prototype model, and uses this model to determine what advantages a hub-and-spoke system offers, and in which scenarios it is better suited than the...extension on the following works: the hierarchical model of Perl and Daskin (1983), time windows features of Chan (1991), combining subtour-breaking and range

  2. Ultraviolet photography of the in vivo human cornea unmasks the Hudson-Stähli line and physiologic vortex patterns.

    PubMed

    Every, Sean G; Leader, John P; Molteno, Anthony C B; Bevin, Tui H; Sanderson, Gordon

    2005-10-01

    To perform ultraviolet (UV) macrophotography of the normal in vivo human cornea, establishing biometric data on the major component of UV absorption for comparison with the Hudson-Stähli (HS) line, the distribution of iron demonstrated by Perls' stain, and cases of typical amiodarone keratopathy. Nonrandomized comparative case series of UV photographs of 76 normal corneas (group 1) and 16 corneas with typical amiodarone keratopathy (group 2). Image-analysis software was used to grade the major component of UV absorption for slope and for the coordinates of its points of intersection with the vertical corneal meridian and inflection. In group 1 the major component had a mean slope of 5.8 degrees, sloping down from nasal to temporal cornea. The mean coordinates of the points of intersection with the vertical corneal meridian and inflection were (0, 0.30) and (0.02, 0.31), respectively. No significant differences between groups 1 and 2 were found for slope (P = 0.155), intersection with the vertical corneal meridian (P = 0.517), or point of inflection (P = 0.344). The major component of UV absorption was consistent with published characteristics of the HS line, and coincidence of UV absorption and Perls'-stained iron was demonstrated in one corneal button. A vortex pattern of UV absorption was observed in all corneas. UV photography demonstrates subclinical corneal iron, confirming its deposition in an integrated HS line/vortex pattern. Coincident iron and amiodarone deposition occurs in amiodarone keratopathy.

  3. Development and Implementation of Dynamic Scripts to Execute Cycled WRF/GSI Forecasts

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Srikishen, Jayanthi; Berndt, Emily; Li, Quanli; Watson, Leela

    2014-01-01

    Automating the coupling of data assimilation (DA) and modeling systems is a unique challenge in the numerical weather prediction (NWP) research community. In recent years, the Development Testbed Center (DTC) has released well-documented tools such as the Weather Research and Forecasting (WRF) model and the Gridpoint Statistical Interpolation (GSI) DA system that can be easily downloaded, installed, and run by researchers on their local systems. However, developing a coupled system in which the various preprocessing, DA, model, and postprocessing capabilities are all integrated can be labor-intensive if one has little experience with any of these individual systems. Additionally, operational modeling entities generally have specific coupling methodologies that can take time to understand and develop code to implement properly. To better enable collaborating researchers to perform modeling and DA experiments with GSI, the Short-term Prediction Research and Transition (SPoRT) Center has developed a set of Perl scripts that couple GSI and WRF in a cycling methodology consistent with the use of real-time, regional observation data from the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC). Because Perl is open source, the code can be easily downloaded and executed regardless of the user's native shell environment. This paper will provide a description of this open-source code and descriptions of a number of the use cases that have been performed by SPoRT collaborators using the scripts on different computing systems.
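
    The cycling logic can be pictured with a small driver sketch. The wrapper script names, the 6-h cycle interval, and the YYYYMMDDHH time handling are assumptions for illustration; the actual SPoRT scripts are the ones distributed by the authors.

      #!/usr/bin/perl
      # Minimal sketch of a cycled analysis/forecast loop: assimilate with
      # GSI, advance WRF, then step the cycle time forward 6 hours.
      use strict;
      use warnings;
      use Time::Local;

      my $cycles = 4;                 # number of assimilation cycles
      my $time   = '2014060100';      # initial cycle time, YYYYMMDDHH

      sub run { system(@_) == 0 or die "command failed: @_\n"; }

      # Advance a YYYYMMDDHH string by $h hours using core Time::Local.
      sub advance {
          my ($t, $h) = @_;
          my ($Y, $M, $D, $H) = $t =~ /^(\d{4})(\d\d)(\d\d)(\d\d)$/;
          my @g = gmtime(timegm(0, 0, $H, $D, $M - 1, $Y) + $h * 3600);
          return sprintf '%04d%02d%02d%02d', $g[5] + 1900, $g[4] + 1, $g[3], $g[2];
      }

      for my $cycle (1 .. $cycles) {
          print "=== cycle $cycle at $time ===\n";
          run('./run_gsi.sh', $time);   # hypothetical GSI wrapper script
          run('./run_wrf.sh', $time);   # hypothetical WRF wrapper script
          $time = advance($time, 6);    # next cycle starts from the new forecast
      }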

  4. NeisseriaBase: a specialised Neisseria genomic resource and analysis platform.

    PubMed

    Zheng, Wenning; Mutha, Naresh V R; Heydari, Hamed; Dutta, Avirup; Siow, Cheuk Chuen; Jakubovics, Nicholas S; Wee, Wei Yee; Tan, Shi Yang; Ang, Mia Yang; Wong, Guat Jah; Choo, Siew Woh

    2016-01-01

    Background. The gram-negative Neisseria is associated with two of the most potent human epidemic diseases: meningococcal meningitis and gonorrhoea. In both cases, disease is caused by bacteria colonizing human mucosal membrane surfaces. Overall, the genus shows great diversity and genetic variation, mainly due to its ability to acquire and incorporate genetic material from a diverse range of sources through horizontal gene transfer. Although a number of databases exist for the Neisseria genomes, they are mostly focused on the pathogenic species. In the present study we present the freely available NeisseriaBase, a database dedicated to the genus Neisseria encompassing the complete and draft genomes of 15 pathogenic and commensal Neisseria species. Methods. The genomic data were retrieved from the National Center for Biotechnology Information (NCBI) and annotated using the RAST server, and the results were then stored in a MySQL database. The protein-coding genes were further analyzed to obtain information such as GC content (%), predicted hydrophobicity, and molecular weight (Da), using in-house Perl scripts. The web application was developed following the secure four-tier web application architecture: (1) client workstation, (2) web server, (3) application server, and (4) database server. The web interface was constructed using PHP, JavaScript, jQuery, AJAX and CSS, utilizing the model-view-controller (MVC) framework. The in-house bioinformatics tools implemented in NeisseriaBase were developed using Python, Perl, BioPerl and R. Results. Currently, NeisseriaBase houses 603,500 Coding Sequences (CDSs), 16,071 RNAs and 13,119 tRNA genes from 227 Neisseria genomes. The database is equipped with interactive web interfaces. Incorporation of the JBrowse genome browser in the database enables fast and smooth browsing of Neisseria genomes. NeisseriaBase includes the standard BLAST program to facilitate homology searching, and for Virulence Factor Database (VFDB) specific homology searches, the VFDB BLAST is also incorporated into the database. In addition, NeisseriaBase is equipped with in-house designed tools such as the Pairwise Genome Comparison tool (PGC) for comparative genomic analysis and the Pathogenomics Profiling Tool (PathoProT) for the comparative pathogenomics analysis of Neisseria strains. Discussion. This user-friendly database not only provides access to a host of genomic resources on Neisseria but also enables high-quality comparative genome analysis, which is crucial for the expanding scientific community interested in Neisseria research. This database is freely available at http://neisseria.um.edu.my.
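
    As an illustration of the kind of per-gene statistic mentioned in the Methods (this is not the NeisseriaBase in-house code), GC content can be computed in a few lines of Perl:

      #!/usr/bin/perl
      # Sketch: GC content (%) of a coding sequence.
      use strict;
      use warnings;

      sub gc_content {
          my ($seq) = @_;
          my $gc  = ($seq =~ tr/GCgc//);      # count G and C bases
          my $len = ($seq =~ tr/ACGTacgt//);  # ignore ambiguity codes
          return $len ? 100 * $gc / $len : 0;
      }

      my $cds = 'ATGGCGCGCAATTTGGCGTAA';      # toy coding sequence
      printf "GC content: %.1f%%\n", gc_content($cds);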

  5. NeisseriaBase: a specialised Neisseria genomic resource and analysis platform

    PubMed Central

    Zheng, Wenning; Mutha, Naresh V.R.; Heydari, Hamed; Dutta, Avirup; Siow, Cheuk Chuen; Jakubovics, Nicholas S.; Wee, Wei Yee; Tan, Shi Yang; Ang, Mia Yang; Wong, Guat Jah

    2016-01-01

    Background. The gram-negative Neisseria is associated with two of the most potent human epidemic diseases: meningococcal meningitis and gonorrhoea. In both cases, disease is caused by bacteria colonizing human mucosal membrane surfaces. Overall, the genus shows great diversity and genetic variation, mainly due to its ability to acquire and incorporate genetic material from a diverse range of sources through horizontal gene transfer. Although a number of databases exist for the Neisseria genomes, they are mostly focused on the pathogenic species. In the present study we present the freely available NeisseriaBase, a database dedicated to the genus Neisseria encompassing the complete and draft genomes of 15 pathogenic and commensal Neisseria species. Methods. The genomic data were retrieved from the National Center for Biotechnology Information (NCBI) and annotated using the RAST server, and the results were then stored in a MySQL database. The protein-coding genes were further analyzed to obtain information such as GC content (%), predicted hydrophobicity, and molecular weight (Da), using in-house Perl scripts. The web application was developed following the secure four-tier web application architecture: (1) client workstation, (2) web server, (3) application server, and (4) database server. The web interface was constructed using PHP, JavaScript, jQuery, AJAX and CSS, utilizing the model-view-controller (MVC) framework. The in-house bioinformatics tools implemented in NeisseriaBase were developed using Python, Perl, BioPerl and R. Results. Currently, NeisseriaBase houses 603,500 Coding Sequences (CDSs), 16,071 RNAs and 13,119 tRNA genes from 227 Neisseria genomes. The database is equipped with interactive web interfaces. Incorporation of the JBrowse genome browser in the database enables fast and smooth browsing of Neisseria genomes. NeisseriaBase includes the standard BLAST program to facilitate homology searching, and for Virulence Factor Database (VFDB) specific homology searches, the VFDB BLAST is also incorporated into the database. In addition, NeisseriaBase is equipped with in-house designed tools such as the Pairwise Genome Comparison tool (PGC) for comparative genomic analysis and the Pathogenomics Profiling Tool (PathoProT) for the comparative pathogenomics analysis of Neisseria strains. Discussion. This user-friendly database not only provides access to a host of genomic resources on Neisseria but also enables high-quality comparative genome analysis, which is crucial for the expanding scientific community interested in Neisseria research. This database is freely available at http://neisseria.um.edu.my. PMID:27017950

  6. Automation Framework for Flight Dynamics Products Generation

    NASA Technical Reports Server (NTRS)

    Wiegand, Robert E.; Esposito, Timothy C.; Watson, John S.; Jun, Linda; Shoan, Wendy; Matusow, Carla

    2010-01-01

    XFDS provides an easily adaptable automation platform. To date it has been used to support flight dynamics operations. It coordinates the execution of other applications such as Satellite Tool Kit, FreeFlyer, MATLAB, and Perl code. It provides a mechanism for passing messages among a collection of XFDS processes, and allows sending and receiving of GMSEC messages. A unified and consistent graphical user interface (GUI) is used for the various tools. Its automation configuration is stored in text files, which can be edited either directly or through the GUI.

  7. Ion tracking in photocathode rf guns

    NASA Astrophysics Data System (ADS)

    Lewellen, John W.

    2002-02-01

    Projected next-generation linac-based light sources, such as PERL or the TESLA free-electron laser, generally assume, as essential components of their injector complexes, long-pulse photocathode rf electron guns. These guns, due to their design rf pulse durations of many milliseconds to continuous wave, may be more susceptible to ion bombardment damage of their cathodes than conventional rf guns, which typically use rf pulses of microsecond duration. This paper explores this possibility in terms of ion propagation within the gun, and presents a basis for future study of the subject.

  8. Web-based data acquisition and management system for GOSAT validation Lidar data analysis

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Takubo, Shoichiro; Kawasaki, Takeru; Abdullah, Indra N.; Uchino, Osamu; Morino, Isamu; Yokota, Tatsuya; Nagai, Tomohiro; Sakai, Tetsu; Maki, Takashi; Arai, Kohei

    2012-11-01

    A web-based data acquisition and management system for GOSAT (Greenhouse gases Observing SATellite) validation lidar data analysis has been developed. The system consists of a data acquisition subsystem (DAS) and a data management subsystem (DMS). DAS, written in Perl, acquires AMeDAS ground-level meteorological data, rawinsonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data, and GOSAT validation lidar data. DMS, written in PHP, displays satellite-pass dates and all acquired data.

  9. FAMA: Fast Automatic MOOG Analysis

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2014-02-01

    FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs), automatically and independently of any subjective approach. Based on the widely used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe I) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and the errors due to uncertainties in the stellar parameters. Convergence criteria are not fixed a priori but are instead based on the quality of the spectra.
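
    The simultaneous search can be pictured as a feedback loop that nudges each parameter against the diagnostic it controls. The sketch below is schematic only: the measure routines are toy stand-ins for statistics FAMA would derive from MOOG output, and the gains and thresholds are invented for illustration.

      #!/usr/bin/perl
      # Schematic three-equilibria loop: adjust Teff against the excitation
      # slope, microturbulence against the EW slope, and log g against the
      # Fe I/Fe II imbalance, until all three diagnostics are near zero.
      use strict;
      use warnings;

      my %par  = (teff => 5600, logg => 4.0,  vt => 1.5);
      my %true = (teff => 5777, logg => 4.44, vt => 1.1);   # toy solution

      # Toy "measurements": linear responses around the fictitious solution.
      sub excitation_slope { ($true{teff} - $_[0]{teff}) * 1e-4 }
      sub ew_slope         { ($_[0]{vt}   - $true{vt})   * 0.05 }
      sub ion_imbalance    { ($_[0]{logg} - $true{logg}) * 0.2  }

      for my $iter (1 .. 50) {
          my ($e, $w, $i) =
              (excitation_slope(\%par), ew_slope(\%par), ion_imbalance(\%par));
          printf "iter %2d: Teff=%6.0f logg=%4.2f vt=%4.2f\n",
                 $iter, @par{qw(teff logg vt)};
          last if abs($e) < 5e-4 && abs($w) < 1e-3 && abs($i) < 5e-3;
          $par{teff} += 5000 * $e;   # positive slope -> raise Teff
          $par{vt}   -= 10   * $w;   # positive slope -> lower microturbulence
          $par{logg} -= 2    * $i;   # Fe I > Fe II -> lower log g
      }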

  10. mpiGraph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, Adam

    2007-05-22

    mpiGraph consists of an MPI application, called mpiGraph and written in C, that measures message bandwidth, and an associated crunch_mpiGraph script, written in Perl, that processes the application output into an HTML report. The mpiGraph application is designed to inspect the health and scalability of a high-performance interconnect while under heavy load. This is useful for detecting hardware and software problems in a system, such as slow nodes, links, switches, or contention in switch routing. It is also useful for characterizing how interconnect performance changes with different settings, or how one interconnect type compares to another.
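
    The post-processing step can be sketched as a small filter that renders a bandwidth matrix as an HTML table. The row layout below is an assumed format for illustration, not mpiGraph's exact output.

      #!/usr/bin/perl
      # Sketch in the spirit of crunch_mpiGraph: render a whitespace-
      # separated bandwidth matrix (MB/s, one row per sender) as HTML.
      use strict;
      use warnings;

      my @rows = (
          [qw(node to0 to1 to2)],
          [qw(0    0   980 975)],
          [qw(1    978 0   981)],
          [qw(2    974 979 0)],
      );

      print "<table border='1'>\n";
      print '<tr>', (map { "<td>$_</td>" } @$_), "</tr>\n" for @rows;
      print "</table>\n";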

  11. Pulse combustion engineering research laboratory for indirect heating applications (PERL-IH). Final report, October 1, 1989-June 30, 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belles, F.E.

    1993-01-01

    Uncontrolled NOx emissions from a variety of pulse combustors were measured. The implementation of flue-gas recirculation to reduce NOx was studied. A flexible workstation for parametric testing was built and used to study the phasing between pressure and heat release, and effects of fuel/air mixing on performance. Exhaust-pipe heat transfer was analyzed. An acoustic model of pulse combustion was developed. Technical support was provided to manufacturers on noise, ignition and condensation. A computerized bibliographic database on pulse combustion was created.

  12. A Web-based database for pathology faculty effort reporting.

    PubMed

    Dee, Fred R; Haugen, Thomas H; Wynn, Philip A; Leaven, Timothy C; Kemp, John D; Cohen, Michael B

    2008-04-01

    To ensure appropriate mission-based budgeting and equitable distribution of funds for faculty salaries, our compensation committee developed a pathology-specific effort reporting database. Principles included the following: (1) measurement should be done by web-based databases; (2) most entry should be done by departmental administration or be relational to other databases; (3) data entry categories should be aligned with funding streams; and (4) units of effort should be equal across categories of effort (service, teaching, research). MySQL was used for all data transactions (http://dev.mysql.com/downloads), and scripts were constructed using Perl (http://www.perl.org). Data are accessed with forms that correspond to fields in the database. The committee's work resulted in a novel database using pathology value units (PVUs) as a standard quantitative measure of effort for activities in an academic pathology department. The most common calculation was to estimate the number of hours required for a specific task, divide by 2080 hours (a Medicare year), and then multiply by 100. Other methods included assigning a baseline PVU for a program, laboratory, or course directorship, with an increment for each student or staff member in that unit. With these methods, a faculty member should acquire approximately 100 PVUs. Some outcomes include (1) plotting PVUs versus salary to identify outliers for salary correction, (2) quantifying effort in activities outside the department, (3) documenting salary expenditure for unfunded research, (4) evaluating salary equity by plotting PVUs versus salary by sex, and (5) aggregating data by category of effort for mission-based budgeting and long-term planning.
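
    The PVU arithmetic described above amounts to a one-line formula; a worked example (task hours invented for illustration) follows.

      #!/usr/bin/perl
      # PVU = task hours / 2080 (a Medicare year) * 100.
      use strict;
      use warnings;

      sub pvu { my ($hours) = @_; return $hours / 2080 * 100 }

      # 4 hours/week for 52 weeks = 208 hours -> 10 PVUs.
      printf "208 hours  -> %5.1f PVUs\n", pvu(208);
      # Half-time effort (1040 hours) -> 50 PVUs.
      printf "1040 hours -> %5.1f PVUs\n", pvu(1040);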

  13. The WORM site: worm.csirc.net

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, T.

    2000-07-01

    The Write One, Run Many (WORM) site (worm.csirc.net) is the on-line home of the WORM language and is hosted by the Criticality Safety Information Resource Center (CSIRC) (www.csirc.net). The purpose of this web site is to create an on-line community for WORM users to gather, share, and archive WORM-related information. WORM is an embedded, functional programming language designed to facilitate the creation of input decks for computer codes that take standard ASCII text files as input. A functional programming language is one that emphasizes the evaluation of expressions, rather than execution of commands. The simplest and perhaps most common example of a functional language is a spreadsheet such as Microsoft Excel. The spreadsheet user specifies expressions to be evaluated, while the spreadsheet itself determines the commands to execute, as well as the order of execution/evaluation. WORM functions in a similar fashion and, as a result, is very simple to use and easy to learn. WORM improves the efficiency of today's criticality safety analyst by allowing: (1) input decks for parameter studies to be created quickly and easily; (2) calculations and variables to be embedded into any input deck, thus allowing for meaningful parameter specifications; (3) problems to be specified using any combination of units; and (4) complex mathematically defined models to be created. WORM is completely written in Perl. Running on all variants of UNIX, Windows, MS-DOS, MacOS, and many other operating systems, Perl is one of the most portable programming languages available. As such, WORM works on practically any computer platform.

  14. CD1 Mouse Retina Is Shielded From Iron Overload Caused by a High Iron Diet

    PubMed Central

    Bhoiwala, Devang L.; Song, Ying; Cwanger, Alyssa; Clark, Esther; Zhao, Liang-liang; Wang, Chenguang; Li, Yafeng; Song, Delu; Dunaief, Joshua L.

    2015-01-01

    Purpose: High RPE iron levels have been associated with age-related macular degeneration. Mutation of the ferroxidase ceruloplasmin leads to RPE iron accumulation and degeneration in patients with aceruloplasminemia; mice lacking ceruloplasmin and its homolog hephaestin have a similar RPE degeneration. To determine whether a high iron diet (HID) could cause RPE iron accumulation, possibly contributing to RPE oxidative stress in AMD, we tested the effect of dietary iron on mouse RPE iron. Methods: Male CD1 strain mice were fed either a standard iron diet (SID) or the same diet with extra iron added (HID) for either 3 months or 10 months. Mice were analyzed with immunofluorescence and Perls' histochemical iron stain to assess iron levels. Levels of ferritin, transferrin receptor, and oxidative stress gene mRNAs were measured by quantitative PCR (qPCR) in neural retina (NR) and isolated RPE. Morphology was assessed in plastic sections. Results: Ferritin immunoreactivity demonstrated a modest increase in the RPE in 10-month HID mice. Analysis by qPCR showed changes in mRNA levels of iron-responsive genes, indicating moderately increased iron in the RPE of 10-month HID mice. However, even by age 18 months, there was no Perls' signal in the retina or RPE and no retinal degeneration. Conclusions: These findings indicate that iron absorbed from the diet can modestly increase the level of iron deposition in the wild-type mouse RPE without causing RPE or retinal degeneration. This suggests regulation of retinal iron uptake at the blood-retinal barriers. PMID:26275132

  15. AST: World Coordinate Systems in Astronomy

    NASA Astrophysics Data System (ADS)

    Berry, David S.; Warren-Smith, Rodney F.

    2014-04-01

    The AST library provides a comprehensive range of facilities for attaching world coordinate systems to astronomical data, for retrieving and interpreting that information in a variety of formats, including FITS-WCS, and for generating graphical output based on it. Core projection algorithms are provided by WCSLIB (ascl:1108.003) and astrometry is provided by the PAL (ascl:1606.002) and SOFA (ascl:1403.026) libraries. AST bindings are available in Python (pyast), Java (JNIAST) and Perl (Starlink::AST). AST is used as the plotting and astrometry library in DS9 and GAIA, and is distributed separately and as part of the Starlink software collection.

  16. GPC: General Polygon Clipper library

    NASA Astrophysics Data System (ADS)

    Murta, Alan

    2015-12-01

    The University of Manchester GPC library is a flexible and highly robust polygon set operations library for use with C, C#, Delphi, Java, Perl, Python, Haskell, Lua, VB.Net and other applications. It supports difference, intersection, exclusive-or and union clip operations, and polygons may comprise multiple disjoint contours. Contour vertices may be given in any order, clockwise or anticlockwise, and contours may be convex, concave or self-intersecting, and may be nested (i.e., polygons may have holes). Output may take the form of either polygon contours or tristrips, and hole and external contours are differentiated in the result.

  17. TRAP: automated classification, quantification and annotation of tandemly repeated sequences.

    PubMed

    Sobreira, Tiago José P; Durham, Alan M; Gruber, Arthur

    2006-02-01

    TRAP, the Tandem Repeats Analysis Program, is a Perl program that provides a unified set of analyses for the selection, classification, quantification and automated annotation of tandemly repeated sequences. TRAP uses the results of the Tandem Repeats Finder program to perform a global analysis of the satellite content of DNA sequences, permitting researchers to easily assess the tandem repeat content for both individual sequences and whole genomes. The results can be generated in convenient formats such as HTML and comma-separated values. TRAP can also be used to automatically generate annotation data in the format of feature table and GFF files.

  18. Current and Future Perspectives of Aerosol Research at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Matsui, Toshihisa; Ichoku, Charles; Randles, Cynthia; Yuan, Tianle; Da Silva, Arlindo M.; Colarco, Peter R.; Kim, Dongchul; Levy, Robert; Sayer, Andrew; Chin, Mian

    2014-01-01

    Aerosols are tiny atmospheric particles that are emitted from various natural and anthropogenic sources. They affect climate through direct and indirect interactions with solar and thermal radiation, clouds, and atmospheric circulation (Solomon et al. 2007). The launch of a variety of sophisticated satellite-based observing systems aboard the Terra, Aqua, Aura, SeaWiFS (see appendix for all acronym expansions), CALIPSO, and other satellites in the late 1990s to mid-2000s through the NASA EOS and other U.S. and non-U.S. programs ushered in a golden era in aerosol research. NASA has been a leader in providing global aerosol characterizations through observations from satellites, ground networks, and field campaigns, as well as from global and regional modeling. AeroCenter (http://aerocenter.gsfc.nasa.gov/), which was formed in 2002 to address the many facets of aerosol research in a collaborative manner, is an interdisciplinary union of researchers (200 members) at NASA GSFC and other nearby institutions, including NOAA, several universities, and research laboratories. AeroCenter hosts a web-accessible regular seminar series and an annual meeting to present up-to-date aerosol research, including measurement techniques; remote sensing algorithms; modeling development; field campaigns; and aerosol interactions with radiation, clouds, precipitation, climate, biosphere, atmospheric chemistry, air quality, and human health. The 2013 annual meeting was held at the NASA GSFC Visitor Center on 31 May 2013, which coincided with the seventh anniversary of the passing of Yoram Kaufman, a modern pioneer in satellite-based aerosol science and the founder of AeroCenter. The central theme of this year's meeting was "current and future perspectives" of NASA's aerosol science and satellite missions.

  19. “PERLE bedside-examination-course for candidates in state examination” – Developing a training program for the third part of medical state examination (oral examination with practical skills)

    PubMed Central

    Karthaus, Anne; Schmidt, Anita

    2016-01-01

    Introduction: In preparation for the state examination, many students have open questions and a need for advice. Tutors of the Skills Lab PERLE-"Praxis ERfahren und Lernen" (experiencing and learning practical skills) have developed a new course concept to provide support and practical assistance for the examinees. Objectives: The course aims to familiarize students with the exam situation so that they gain more confidence, enabling them to confront the specific situation of the exam in a protected environment. Furthermore, soft skills are exercised and trained. Concept of the course: The course was inspired by the OSCE model (Objective Structured Clinical Examination), an example of case-based learning and assessment. Acquired knowledge can be revised and extended through the case studies. Experienced tutors provide assistance in discipline-specific competencies and help with organizational issues such as dress code and behaviour. Evaluation of the course: An evaluation was completed by the attending participants after every course, and the course is continually being developed on the basis of this feedback. In March, April and October 2015, six courses with a total of 84 participants took place; 76 completed questionnaires (91%) were analysed. Discussion: Strengths of the course are a good tutor-to-participant ratio of 1:4 (one tutor guides four participants), the interactivity of the course, and the high flexibility in responding to the group's needs. Weaknesses are the tight schedule and the fact that pre- and post-course evaluations have not yet been performed. Conclusion: In terms of "best practice", this article shows an example of how to offer low-cost, low-threshold preparation for the state examination. PMID:27579355

  20. "PERLE bedside-examination-course for candidates in state examination" - Developing a training program for the third part of medical state examination (oral examination with practical skills).

    PubMed

    Karthaus, Anne; Schmidt, Anita

    2016-01-01

    In preparation for the state examination, many students have open questions and a need for advice. Tutors of the Skills Lab PERLE-"Praxis ERfahren und Lernen" (experiencing and learning practical skills) have developed a new course concept to provide support and practical assistance for the examinees. The course aims to familiarize students with the exam situation so that they gain more confidence, enabling them to confront the specific situation of the exam in a protected environment. Furthermore, soft skills are exercised and trained. Concept of the course: The course was inspired by the OSCE model (Objective Structured Clinical Examination), an example of case-based learning and assessment. Acquired knowledge can be revised and extended through the case studies. Experienced tutors provide assistance in discipline-specific competencies and help with organizational issues such as dress code and behaviour. Evaluation of the course: An evaluation was completed by the attending participants after every course, and the course is continually being developed on the basis of this feedback. In March, April and October 2015, six courses with a total of 84 participants took place; 76 completed questionnaires (91%) were analysed. Strengths of the course are a good tutor-to-participant ratio of 1:4 (one tutor guides four participants), the interactivity of the course, and the high flexibility in responding to the group's needs. Weaknesses are the tight schedule and the fact that pre- and post-course evaluations have not yet been performed. In terms of "best practice", this article shows an example of how to offer low-cost, low-threshold preparation for the state examination.

  1. A De-Novo Genome Analysis Pipeline (DeNoGAP) for large-scale comparative prokaryotic genomics studies.

    PubMed

    Thakur, Shalabh; Guttman, David S

    2016-06-30

    Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While a number of excellent tools have been developed for this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes, since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly, since homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes, offering tools and algorithms for the annotation and analysis of both complete and draft genome sequences. The pipeline was developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for the automated installation of the necessary external programs on Ubuntu Linux; the pipeline should also be compatible with other Linux and Unix systems once the necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .

  2. Informatics technology mimics ecology: dense, mutualistic collaboration networks are associated with higher publication rates.

    PubMed

    Sorani, Marco D

    2012-01-01

    Information technology (IT) adoption enables biomedical research. Publications are an accepted measure of research output, and network models can describe the collaborative nature of publication. In particular, ecological networks can serve as analogies for publication and technology adoption. We constructed network models of adoption of bioinformatics programming languages and health IT (HIT) from the literature. We selected seven programming languages and four types of HIT. We performed PubMed searches to identify publications since 2001. We calculated summary statistics and analyzed spatiotemporal relationships. Then, we assessed ecological models of specialization, cooperativity, competition, evolution, biodiversity, and stability associated with publications. Adoption of HIT has been variable, while scripting languages have experienced rapid adoption. Hospital systems had the largest HIT research corpus, while Perl had the largest language corpus. Scripting languages represented the largest connected network components. The relationship between edges and nodes was linear, though Bioconductor had more edges than expected and Perl had fewer. Spatiotemporal relationships were weak. Most languages shared a bioinformatics specialization and appeared mutualistic or competitive. HIT specializations varied. Specialization was highest for Bioconductor and radiology systems. Specialization and cooperativity were positively correlated among languages but negatively correlated among HIT. Rates of language evolution were similar. Biodiversity among languages grew in the first half of the decade and stabilized, while diversity among HIT was variable but flat. Compared with publications in 2001, correlation with publications one year later was positive, while correlation after ten years was weak and negative. Adoption of new technologies can be unpredictable. Spatiotemporal relationships facilitate adoption but are not sufficient. As with ecosystems, dense, mutualistic, specialized co-habitation is associated with faster growth. There are rapidly changing trends in external technological and macroeconomic influences. We propose that a better understanding of how technologies are adopted can facilitate their development.

  3. Investigating Plasma Motion of Magnetic Clouds at 1 AU through a Velocity-modified Cylindrical Force-free Flux Rope Model

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Shen, C.; Liu, R.; Zhou, Z.

    2014-12-01

    Magnetic clouds (MCs) are the interplanetary counterparts of coronal mass ejections (CMEs). Due to the very low plasma β in MCs, they are believed to be in a nearly force-free state and therefore can be modeled by a cylindrical force-free flux rope. However, the force-free state describes only the magnetic field topology, not the plasma motion of an MC. For an MC propagating in interplanetary space, the global plasma motion has three possible components: the linear propagating motion of the MC away from the Sun, expanding motion, and circular motion with respect to the axis of the MC. By assuming quasi-steady evolution and self-similar expansion, we introduced the three-component motion into the cylindrical force-free flux rope model and developed a velocity-modified model. We then applied the model to 73 MCs observed by the Wind spacecraft to investigate the properties of the plasma motion of MCs. It is found that (1) some MCs did not propagate along the Sun-Earth line, providing direct evidence of CME deflection and/or rotation in interplanetary space; (2) the expansion speed is correlated with the radial propagation speed, and 62%/17% of the MCs underwent under-/over-expansion at 1 AU; and (3) the circular motion does exist, though it is only on the order of 10 km s-1. These findings advance our understanding of MC properties at 1 AU as well as the dynamic evolution of CMEs from the Sun through interplanetary space.

  4. Automated Job Controller for Clouds and the Earth's Radiant Energy System (CERES) Production Processing

    NASA Astrophysics Data System (ADS)

    Gleason, J. L.; Hillyer, T. N.

    2011-12-01

    Clouds and the Earth's Radiant Energy System (CERES) is one of NASA's highest priority Earth Observing System (EOS) scientific instruments. The CERES science team will integrate data from the CERES Flight Model 5 (FM5) on the NPOESS Preparatory Project (NPP) in addition to the four CERES scanning instruments on Terra and Aqua. The CERES production system consists of over 75 Product Generation Executives (PGEs) maintained by twelve subsystem groups. The processing chain fuses CERES instrument observations with data from 19 other unique sources. The addition of FM5 to over 22 instrument-years of data to be reprocessed from Flight Models 1-4 creates a need for an optimized production processing approach. This poster discusses a new approach, using JBoss and Perl to manage job scheduling and interdependencies between PGEs and external data sources. The new approach uses JBoss to serve handler servlets, which regulate PGE-level job interdependencies and job completion notifications. Additional servlets regulate all job submissions from the handlers and interact with the operator. Perl submission scripts build Process Control Files and interact directly with the operating system and cluster scheduler. The result is a reduced burden on the operator, achieved by algorithmically enforcing a set of rules that determines the optimal time to produce data products with the highest integrity. These rules are designed on a per-PGE basis and change periodically. The design provides the means to update PGE rules dynamically at run time and increases processing throughput by using an event-driven controller. The immediate notification of a PGE's completion (an event) allows successor PGEs to launch at the proper time with minimal start-up latency, thereby increasing computer system utilization.
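
    The submission side described above can be sketched in a few lines: build a Process Control File for a PGE, then hand the job to the scheduler. The PCF keys, the PGE name, and the qsub invocation are illustrative assumptions, not the CERES formats.

      #!/usr/bin/perl
      # Sketch: write a simple key = value PCF and submit the job.
      use strict;
      use warnings;

      my %job = (
          pge    => 'CER_EXAMPLE_PGE',              # hypothetical PGE name
          date   => '2011-10-01',
          input  => '/data/ceres/in/granule.h5',
          output => '/data/ceres/out/granule.out',
      );

      my $pcf = "$job{pge}_$job{date}.pcf";
      open my $fh, '>', $pcf or die "cannot write $pcf: $!";
      print {$fh} "$_ = $job{$_}\n" for sort keys %job;
      close $fh;

      # Submit to the cluster scheduler; on completion a controller would
      # be notified so that successor PGEs can launch without delay.
      system('qsub', '-N', $job{pge}, '-v', "PCF=$pcf", 'run_pge.sh') == 0
          or die "qsub submission failed\n";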

  5. EUCANEXT: an integrated database for the exploration of genomic and transcriptomic data from Eucalyptus species

    PubMed Central

    Nascimento, Leandro Costa; Salazar, Marcela Mendes; Lepikson-Neto, Jorge; Camargo, Eduardo Leal Oliveira; Parreiras, Lucas Salera; Carazzolle, Marcelo Falsarella

    2017-01-01

    Tree species of the genus Eucalyptus are the most valuable and widely planted hardwoods in the world. Given the economic importance of Eucalyptus trees, much effort has been made towards the generation of specimens with superior forestry properties that can deliver high-quality feedstocks, customized to the industry's needs for both cellulosic (paper) and lignocellulosic biomass production. In line with these efforts, large sets of molecular data have been generated by several scientific groups, providing invaluable information that can be applied in the development of improved specimens. In order to fully explore the potential of the available datasets, a public database that provides integrated access to genomic and transcriptomic data from Eucalyptus is needed. EUCANEXT is a database that analyses and integrates publicly available Eucalyptus molecular data, such as the E. grandis genome assembly and predicted genes, ESTs from several species, and digital gene expression from 26 RNA-Seq libraries. The database has been implemented on a Fedora Linux machine running MySQL and Apache, with Perl CGI used for the web interfaces. EUCANEXT provides a user-friendly web interface for easy access and analysis of publicly available molecular data from Eucalyptus species. This integrated database allows complex searches by gene name, keyword or sequence similarity and is publicly accessible at http://www.lge.ibi.unicamp.br/eucalyptusdb. Through EUCANEXT, users can perform complex analyses to identify genes related to traits of interest using RNA-Seq libraries and tools for differential expression analysis. Moreover, the entire bioinformatics pipeline described here, including the database schema and Perl scripts, is readily available and can be applied to any genomic and transcriptomic project, regardless of the organism. Database URL: http://www.lge.ibi.unicamp.br/eucalyptusdb PMID:29220468

  6. Passive stiffness of monoarticular lower leg muscles is influenced by knee joint angle.

    PubMed

    Ateş, Filiz; Andrade, Ricardo J; Freitas, Sandro R; Hug, François; Lacourpaille, Lilian; Gross, Raphael; Yucesoy, Can A; Nordez, Antoine

    2018-03-01

    While several studies have demonstrated the occurrence of intermuscular mechanical interactions, the physiological significance of these interactions remains a matter of debate. The purpose of this study was to quantify the localized changes in the shear modulus of the gastrocnemius lateralis (GL) and of monoarticular dorsi- and plantar-flexor muscles induced by a change in knee angle. Participants underwent slow passive ankle rotations at two knee positions: knee flexed at 90° and knee fully extended. Ultrasound shear wave elastography was used to assess the muscle shear modulus of the GL, soleus [both proximally (SOL-proximal) and distally (SOL-distal)], peroneus longus (PERL), and tibialis anterior (TA). This was performed during two experimental sessions (experiment I: n = 11; experiment II: n = 10). The shear modulus of each muscle was compared between the two knee positions. The shear modulus was significantly higher when the knee was fully extended than when the knee was flexed (P < 0.001) for the GL (average increase over the whole range of motion: + 5.8 ± 1.3 kPa), SOL-distal (+ 4.5 ± 1.5 kPa), PERL (+ 1.1 ± 0.7 kPa), and TA (+ 1.6 ± 1.0 kPa). In contrast, a lower SOL-proximal shear modulus (P < 0.001, - 5.9 ± 1.0 kPa) was observed. As the muscle shear modulus is linearly related to passive muscle force, these results provide evidence of a non-negligible intermuscular mechanical interaction between the human lower leg muscles during passive ankle rotations. The role of these interactions in the production of coordinated movements requires further investigation.

  7. How transformational learning promotes caring, consultation and creativity, and ultimately contributes to sustainable development: Lessons from the Partnership for Education and Research about Responsible Living (PERL) network

    NASA Astrophysics Data System (ADS)

    Thoresen, Victoria Wyszynski

    2017-12-01

    Oases of learning which are transformative and lead to significant behavioural change can be found around the globe. Transformational learning has helped learners not only to understand what they have been taught but also to re-conceptualise and re-apply this understanding to their daily lives. Unfortunately, as many global reports indicate, inspirational transformational learning approaches for sustainable development are rare and have yet to become the norm - despite calls for such approaches by several outstanding educators and organisations. This article examines three learning approaches developed by the network of the Partnership for Education and Research about Responsible Living (PERL). These approaches are structured around core elements of transformative learning for sustainable development, yet focus particularly on the ability to care, consult with others and be creative. They seem to depend on the learners' ability to articulate their perceptions of sustainable development in relation to their own values and to identify how these are actualised in their daily life. Together with other core elements of transformative learning, an almost magical (not precisely measurable) synergy then emerges. The intensity of this synergy appears to be directly related to the individual learner's understanding of the contradictions, interlinkages and interdependencies of modern society. The impact of this synergy seems to be concurrent with the extent to which the learner engages in a continual learning process with those with whom he/she has contact. The findings of this study suggest that mainstreaming transformational learning for sustainable development in ways that release the "magic synergy of creative caring" can result in the emergence of individuals who are willing and able to move from "business as usual" towards more socially just, economically equitable, and environmentally sensitive behaviour.

  8. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. In addition, the framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
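
    Since featsel's core is a C++ binary driven by helper Perl scripts, a benchmarking helper in the spirit described above could be a thin wrapper that runs the solver per instance and tabulates the results. This is only a sketch: the binary path, flags and output format below are assumptions, not featsel's documented interface.

        #!/usr/bin/perl
        # Illustrative sketch: run a feature-selection binary over several
        # instances and collect results into a tab-separated table. The
        # binary name (./bin/featsel), its flags and its output format
        # are assumptions.
        use strict;
        use warnings;

        my @instances = glob 'input/*.xml';
        print join("\t", qw(instance best_subset cost)), "\n";
        for my $inst (@instances) {
            # Capture the solver's stdout for one instance.
            my $out = `./bin/featsel -a es -c subset_sum -f $inst`;
            # Assume one line such as: "{0,1,0,1}, cost = 3.5"
            if ($out =~ /\{([01,]+)\}.*cost\s*=\s*([\d.]+)/) {
                print join("\t", $inst, $1, $2), "\n";
            }
        }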

  9. Biotool2Web: creating simple Web interfaces for bioinformatics applications.

    PubMed

    Shahid, Mohammad; Alam, Intikhab; Fuellen, Georg

    2006-01-01

    Currently there are many bioinformatics applications being developed, but there is no easy way to publish them on the World Wide Web. We have developed a Perl script, called Biotool2Web, which makes the task of creating web interfaces for simple ('home-made') bioinformatics applications quick and easy. Biotool2Web uses an XML document containing the parameters to run the tool on the Web, and generates the corresponding HTML and common gateway interface (CGI) files ready to be published on a web server. This tool is available for download at http://www.uni-muenster.de/Bioinformatics/services/biotool2web/. Contact: Georg Fuellen (fuellen@alum.mit.edu).
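
    The idea of generating a web form from an XML parameter description can be sketched in a few lines of Perl. This is not Biotool2Web's code, and the XML element names (tool, param, name, label) are invented for illustration:

        #!/usr/bin/perl
        # Minimal sketch of the Biotool2Web idea: read an XML description
        # of a tool's parameters and emit an HTML form for it. The XML
        # layout used here is hypothetical.
        use strict;
        use warnings;
        use XML::Simple qw(XMLin);

        # KeyAttr => [] disables XML::Simple's default array folding.
        my $cfg = XMLin('tool.xml', ForceArray => ['param'], KeyAttr => []);
        print qq{<form action="/cgi-bin/$cfg->{name}.cgi" method="post">\n};
        for my $p (@{ $cfg->{param} }) {
            print qq{  $p->{label}: <input type="text" name="$p->{name}"><br>\n};
        }
        print qq{  <input type="submit" value="Run $cfg->{name}">\n</form>\n};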

  10. Detecting distant homologies on protozoans metabolic pathways using scientific workflows.

    PubMed

    da Cruz, Sérgio Manuel Serra; Batista, Vanessa; Silva, Edno; Tosta, Frederico; Vilela, Clarissa; Cuadrat, Rafael; Tschoeke, Diogo; Dávila, Alberto M R; Campos, Maria Luiza Machado; Mattoso, Marta

    2010-01-01

    Bioinformatics experiments are typically composed of programs in pipelines manipulating an enormous quantity of data. An interesting approach for managing those experiments is through workflow management systems (WfMS). In this work we discuss WfMS features to support genome homology workflows and present some relevant issues for typical genomic experiments. Our evaluation used the Kepler WfMS to manage a real genomic pipeline, named OrthoSearch, originally defined as a Perl script. We show a case study detecting distant homologies on trypanosomatid metabolic pathways. Our results reinforce the benefits of WfMS over script languages and point out challenges to WfMS in distributed environments.

  11. Accessing SDO data in a pipeline environment using the VSO WSDL/SOAP interface

    NASA Astrophysics Data System (ADS)

    Suarez Sola, F. I.; Hourcle, J. A.; Amezcua, A.; Bogart, R.; Davey, A. R.; Gurman, J. B.; Hill, F.; Hughitt, V. K.; Martens, P. C.; Spencer, J.; Vso Team

    2010-12-01

    As part of the Virtual Solar Observatory (VSO) effort to support Solar Dynamics Observatory (SDO) data, the VSO has worked on bringing its WSDL document and SOAP interface up to date, to make them compatible with the most widely used web services core engines (e.g. Axis2, JWS). In this presentation we will explore the possibilities available for searching and/or fetching data within pipeline code. We will explain some of the WSDL/VSO-SDO interface intricacies and show how the vast amount of data that is available via the VSO can be tapped via IDL, Java, Perl or C in an uncomplicated way.

  12. Comparative XAFS studies of some cobalt complexes of (3-N-phenyl-thiourea-pentanone-2)

    NASA Astrophysics Data System (ADS)

    Soni, Namrata; Parsai, Neetu; Mishra, Ashutosh

    2016-10-01

    XAFS spectroscopy is a useful method for determining the local structure around a specific atom in disordered systems. A XAFS study of some cobalt complexes of (3-N-phenyl-thiourea-pentanone-2) is carried out using the latest XAFS analysis software Demeter with Strawberry Perl. The same study is also carried out theoretically using Mathcad software. It is found that the thiourea has a significant influence on the spectra, and the results obtained experimentally and theoretically are in agreement. Fourier transforms of the experimental and theoretically generated XAFS have been taken to obtain the first shell radial distance. The values so obtained are in agreement with each other.

  13. The variant call format and VCFtools.

    PubMed

    Danecek, Petr; Auton, Adam; Abecasis, Goncalo; Albers, Cornelis A; Banks, Eric; DePristo, Mark A; Handsaker, Robert E; Lunter, Gerton; Marth, Gabor T; Sherry, Stephen T; McVean, Gilean; Durbin, Richard

    2011-08-01

    The variant call format (VCF) is a generic format for storing DNA polymorphism data such as SNPs, insertions, deletions and structural variants, together with rich annotations. VCF is usually stored in a compressed manner and can be indexed for fast data retrieval of variants from a range of positions on the reference genome. The format was developed for the 1000 Genomes Project, and has also been adopted by other projects such as UK10K, dbSNP and the NHLBI Exome Project. VCFtools is a software suite that implements various utilities for processing VCF files, including validation, merging and comparing, and also provides a general Perl API. http://vcftools.sourceforge.net
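
    Reading records through the Perl API mentioned above typically looks like the following sketch (module and method names as commonly documented for VCFtools' Vcf.pm; treat the details as approximate rather than a definitive reference):

        #!/usr/bin/perl
        # Sketch of reading a VCF with the VCFtools Perl API (Vcf.pm).
        use strict;
        use warnings;
        use Vcf;

        my $vcf = Vcf->new(file => 'variants.vcf.gz');
        $vcf->parse_header();
        while (my $rec = $vcf->next_data_hash()) {
            # Print chromosome, position and the list of alternate alleles.
            printf "%s\t%s\t%s\n",
                $rec->{CHROM}, $rec->{POS}, join(',', @{ $rec->{ALT} });
        }
        $vcf->close();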

  14. NOAO observing proposal processing system

    NASA Astrophysics Data System (ADS)

    Bell, David J.; Gasson, David; Hartman, Mia

    2002-12-01

    Since going electronic in 1994, NOAO has continued to refine and enhance its observing proposal handling system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form or via Gemini's downloadable Phase-I Tool. NOAO staff can use online interfaces for administrative tasks, technical reviews, telescope scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available online. The system, now known as ANDES, is designed as a thin-client architecture (web pages are now used for almost all database functions) built using open source tools (FreeBSD, Apache, MySQL, Perl, PHP) to process descriptively-marked (LaTeX, XML) proposal documents.

  15. Finding, Browsing and Getting Data Easily Using SPDF Web Services

    NASA Technical Reports Server (NTRS)

    Candey, R.; Chimiak, R.; Harris, B.; Johnson, R.; Kovalick, T.; Lal, N.; Leckner, H.; Liu, M.; McGuire, R.; Papitashvili, N.; hide

    2010-01-01

    The NASA GSFC Space Physics Data Facility (SPDF) provides heliophysics science-enabling information services for enhancing scientific research and enabling integration of these services into the Heliophysics Data Environment paradigm, via standards-based SOAP and Representational State Transfer (REST) web services in addition to web browser, FTP, and OPeNDAP interfaces. We describe these interfaces and the philosophies behind these web services, and show how to call them from various languages, such as IDL and Perl. We are working towards a "one simple line to call" philosophy extolled in the recent VxO discussions. Combining data from many instruments and missions enables broad research analysis and correlation and coordination with other experiments and missions.
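
    From Perl, a WSDL-described SOAP service of this kind can be called with SOAP::Lite's service() interface. The WSDL URL and the operation name below are placeholders, not the actual SPDF endpoints:

        #!/usr/bin/perl
        # Hedged sketch of calling a SOAP service from Perl with SOAP::Lite.
        # The WSDL URL and the operation name (getDataAvailability) are
        # hypothetical, for illustration only.
        use strict;
        use warnings;
        use SOAP::Lite;

        my $service = SOAP::Lite->service(
            'https://example.gsfc.nasa.gov/WS/cdas.wsdl');  # placeholder URL
        my $result  = $service->getDataAvailability('AC_H0_MFI');
        print "Availability: $result\n";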

  16. Charming Users into Scripting CIAO with Python

    NASA Astrophysics Data System (ADS)

    Burke, D. J.

    2011-07-01

    The Science Data Systems group of the Chandra X-ray Center provides a number of scripts and Python modules that extend the capabilities of CIAO. Experience in converting the existing scripts—written in a variety of languages such as bash, csh/tcsh, Perl and S-Lang—to Python, and conversations with users, led to the development of the ciao_contrib.runtool module. This allows users to easily run CIAO tools from Python scripts, and utilizes the metadata provided by the parameter-file system to create an API that provides the flexibility and safety guarantees of the command-line. The module is provided to the user community and is being used within our group to create new scripts.

  17. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

    PRay is a graphical user interface for the interactive display and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are graphical editing of nodes and fast adjustment of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X, and provides a version-controlled source code repository for community development (https://sourceforge.net/projects/pray-plot-rayinvr/).

  18. ConoDictor: a tool for prediction of conopeptide superfamilies.

    PubMed

    Koua, Dominique; Brauer, Age; Laht, Silja; Kaplinski, Lauris; Favreau, Philippe; Remm, Maido; Lisacek, Frédérique; Stöcklin, Reto

    2012-07-01

    ConoDictor is a tool that enables fast and accurate classification of conopeptides into superfamilies based on their amino acid sequence. ConoDictor combines predictions from two complementary approaches: profile hidden Markov models and generalized profiles. Results appear in a browser as tables that can be downloaded in various formats. This application is particularly valuable in view of the exponentially increasing number of conopeptides that are being identified. ConoDictor was written in Perl using the Common Gateway Interface (CGI) module with a PHP submission page. Sequence matching is performed with hmmsearch from HMMER 3 and ps_scan.pl from the pftools 2.3 package. ConoDictor is freely accessible at http://conco.ebc.ee.
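
    A wrapper of the kind described, combining hmmsearch hits into a per-sequence best superfamily call, might be sketched as follows (file names are placeholders; the --tblout column layout follows the HMMER 3 documentation):

        #!/usr/bin/perl
        # Sketch of an hmmsearch wrapper reporting the best-scoring
        # superfamily model per sequence. Not ConoDictor's actual code.
        use strict;
        use warnings;

        system('hmmsearch', '--tblout', 'hits.tbl', '--noali',
               'superfamilies.hmm', 'conopeptides.fasta') == 0
            or die "hmmsearch failed: $?";

        my %best;  # sequence id => [model, score]
        open my $fh, '<', 'hits.tbl' or die $!;
        while (<$fh>) {
            next if /^#/;
            # tblout: target, acc, query, acc, E-value, score, ...
            my ($target, undef, $model, undef, undef, $score) = split ' ';
            $best{$target} = [$model, $score]
                if !$best{$target} || $score > $best{$target}[1];
        }
        close $fh;
        printf "%s\t%s\t%s\n", $_, @{ $best{$_} } for sort keys %best;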

  19. [Two cases of skin pigmentation in association with minocycline therapy (author's transl)].

    PubMed

    Leroy, J P; Dorval, J C; Dewitte, J D; Guillerm, D; Volant, A; Masse, R

    1981-01-01

    Report of two cases of skin pigmentation during minocycline therapy. Examination showed confluent blue-gray oval patches on the anterior part of the legs, occurring after ingestion of 12 g and 100 g of minocycline, respectively. Microscopic examination was identical in each case and showed two lesions: an increase in the amount of melanin deposited in the basal layer of the epidermis, and the presence of brown-black pigment at all levels of the dermis, especially near the sweat glands. This pigment was strongly positive with Perls' stain. Electron microscopic examination showed a finely granular pigment that was exclusively intracellular, within dermal fibroblasts and macrophages. This pigment seemed to consist mainly of hemosiderin.

  20. Study of XAFS of some Fe compounds and determination of first shell radial distance

    NASA Astrophysics Data System (ADS)

    Parsai, Neetu; Mishra, Ashutosh

    2017-05-01

    X-ray absorption fine structure (XAFS) spectra of some Fe compounds have been studied using the latest XAFS analysis software Demeter with Strawberry Perl. The processed XAFS data of the Fe compounds have been taken from an available model compound library. The XAFS data have been processed to plot the µ(E) versus E spectra. These spectra have been converted into K-space, R-space and q-space. The R-space spectra have been used to obtain the first shell radial distance in Fe compounds. Structural parameters such as the first shell radial distance are useful for determining bond lengths in Fe compounds; hence the study plays an important role in biological applications.

  1. EST databases and web tools for EST projects.

    PubMed

    Shen, Yao-Qing; O'Brien, Emmet; Koski, Liisa; Lang, B Franz; Burger, Gertraud

    2009-01-01

    This chapter outlines key considerations for constructing and implementing an EST database. Instead of showing the technological details step by step, emphasis is put on the design of an EST database suited to the specific needs of EST projects and how to choose the most suitable tools. Using TBestDB as an example, we illustrate the essential factors to be considered for database construction and the steps for data population and annotation. This process employs technologies such as PostgreSQL, Perl, and PHP to build the database and interface, and tools such as AutoFACT for data processing and annotation. We discuss these in comparison to other available technologies and tools, and explain the reasons for our choices.

  2. snpTree--a web-server to identify and construct SNP trees from whole genome sequence data.

    PubMed

    Leekitcharoenphon, Pimlapas; Kaas, Rolf S; Thomsen, Martin Christen Frølund; Friis, Carsten; Rasmussen, Simon; Aarestrup, Frank M

    2012-01-01

    The advances and decreasing cost of whole genome sequencing (WGS) will soon make this technology available for routine infectious disease epidemiology. In epidemiological studies, outbreak isolates have very little diversity and require extensive genomic analysis to differentiate and classify isolates. One of the most successful and broadly used methods is analysis of single nucleotide polymorphisms (SNPs). Currently, there are different tools and methods to identify SNPs, with various options and cut-off values. Furthermore, all current methods require bioinformatic skills. Thus, we lack a standard and simple automatic tool to determine SNPs and construct phylogenetic trees from WGS data. Here we introduce snpTree, a server for online automatic SNP analysis. This tool is composed of different SNP analysis suites and Perl and Python scripts. snpTree can identify SNPs and construct phylogenetic trees from WGS as well as from assembled genomes or contigs. WGS data in fastq format are aligned to reference genomes by BWA, while contigs in fasta format are processed by Nucmer. SNPs are concatenated based on position on the reference genome, and a tree is constructed from the concatenated SNPs using FastTree and a Perl script. The online server was implemented in HTML, Java and Python scripts. The server was evaluated using four published bacterial WGS data sets (V. cholerae, S. aureus CC398, S. Typhimurium and M. tuberculosis). The evaluation results for the first three cases were consistent and concordant for both raw reads and assembled genomes. In the latter case the original publication involved extensive filtering of SNPs, which could not be repeated using snpTree. The snpTree server is an easy-to-use option for rapid, standardised and automatic SNP analysis in epidemiological studies, also for users with limited bioinformatic experience. The web server is freely accessible at http://www.cbs.dtu.dk/services/snpTree-1.0/.
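
    The concatenation-and-tree step described above can be illustrated with a short Perl sketch. The input layout (a table with a position column followed by one allele column per isolate) is an assumption, not snpTree's internal format; FastTree's -nt flag is its real nucleotide mode:

        #!/usr/bin/perl
        # Illustrative sketch: turn a per-position SNP table into one
        # concatenated pseudo-sequence per isolate, then hand the FASTA
        # to FastTree. Input file layout is hypothetical.
        use strict;
        use warnings;

        open my $in, '<', 'snp_table.tsv' or die $!;
        chomp(my $header = <$in>);
        my (undef, @isolates) = split /\t/, $header;
        my %concat;
        while (<$in>) {
            chomp;
            my (undef, @alleles) = split /\t/;
            $concat{ $isolates[$_] } .= $alleles[$_] for 0 .. $#alleles;
        }
        close $in;

        open my $out, '>', 'snps.fasta' or die $!;
        print {$out} ">$_\n$concat{$_}\n" for @isolates;
        close $out;

        # -nt selects FastTree's nucleotide mode.
        system('FastTree -nt snps.fasta > snps.nwk') == 0
            or die "FastTree failed";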

  3. Informatics Technology Mimics Ecology: Dense, Mutualistic Collaboration Networks Are Associated with Higher Publication Rates

    PubMed Central

    Sorani, Marco D.

    2012-01-01

    Information technology (IT) adoption enables biomedical research. Publications are an accepted measure of research output, and network models can describe the collaborative nature of publication. In particular, ecological networks can serve as analogies for publication and technology adoption. We constructed network models of adoption of bioinformatics programming languages and health IT (HIT) from the literature. We selected seven programming languages and four types of HIT. We performed PubMed searches to identify publications since 2001. We calculated summary statistics and analyzed spatiotemporal relationships. Then, we assessed ecological models of specialization, cooperativity, competition, evolution, biodiversity, and stability associated with publications. Adoption of HIT has been variable, while scripting languages have experienced rapid adoption. Hospital systems had the largest HIT research corpus, while Perl had the largest language corpus. Scripting languages represented the largest connected network components. The relationship between edges and nodes was linear, though Bioconductor had more edges than expected and Perl had fewer. Spatiotemporal relationships were weak. Most languages shared a bioinformatics specialization and appeared mutualistic or competitive. HIT specializations varied. Specialization was highest for Bioconductor and radiology systems. Specialization and cooperativity were positively correlated among languages but negatively correlated among HIT. Rates of language evolution were similar. Biodiversity among languages grew in the first half of the decade and stabilized, while diversity among HIT was variable but flat. Compared with publications in 2001, correlation with publications one year later was positive while correlation after ten years was weak and negative. Adoption of new technologies can be unpredictable. Spatiotemporal relationships facilitate adoption but are not sufficient. As with ecosystems, dense, mutualistic, specialized co-habitation is associated with faster growth. There are rapidly changing trends in external technological and macroeconomic influences. We propose that a better understanding of how technologies are adopted can facilitate their development. PMID:22279593

  4. CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data.

    PubMed

    Hallin, Peter F; Ussery, David W

    2004-12-12

    Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and results from bioinformatic analysis. More than 150 sequenced bacterial genomes are now available, and comparisons of properties for taxonomically similar organisms are not readily available to many biologists. In addition to the most basic information, such as AT content, chromosome length, tRNA count and rRNA count, a large number of more complex calculations are needed to perform detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA compositions like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses that are presented on the CBS Genome Atlas Web page. Complex analyses, changing methods and the frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results amount to more than 220 pieces of information. The backbone of this solution consists of a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web server via PHP4, to present dynamic web content for users outside the center. This solution is tightly fitted to the existing server infrastructure, and the solutions proposed here can perhaps serve as a template for other research groups to solve database issues. A web-based user interface which is dynamically linked to the Genome Atlas Database can be accessed via www.cbs.dtu.dk/services/GenomeAtlas/. This paper has a supplemental information page which links to the examples presented: www.cbs.dtu.dk/services/GenomeAtlas/suppl/bioinfdatabase.
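
    The Perl-plus-MySQL synchronization pattern described above reduces, at its core, to recomputing values and upserting them with DBI. A minimal sketch, with an invented atlas_stats table rather than the real Genome Atlas schema:

        #!/usr/bin/perl
        # Hedged sketch of the update pattern: push freshly computed
        # per-genome values into a MySQL table. The table and column
        # names are invented for illustration.
        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('DBI:mysql:database=atlas;host=localhost',
                               'maintainer', 'secret', { RaiseError => 1 });
        my $sth = $dbh->prepare(q{
            REPLACE INTO atlas_stats (genome_id, at_content, chrom_length)
            VALUES (?, ?, ?)
        });

        # One line per genome: id, AT content, chromosome length.
        open my $fh, '<', 'stats.tsv' or die $!;
        while (<$fh>) {
            chomp;
            $sth->execute(split /\t/);
        }
        close $fh;
        $dbh->disconnect;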

  5. OOSTethys - Open Source Software for the Global Earth Observing Systems of Systems

    NASA Astrophysics Data System (ADS)

    Bridger, E.; Bermudez, L. E.; Maskey, M.; Rueda, C.; Babin, B. L.; Blair, R.

    2009-12-01

    An open source software project is much more than just picking the right license, hosting modular code and providing effective documentation. Success in advancing in an open, collaborative way requires that the process match the expected code functionality to the developers' personal expertise and organizational needs, as well as having an enthusiastic and responsive core lead group. We will present the lessons learned from OOSTethys, which is a community of software developers and marine scientists who develop open source tools, in multiple languages, to integrate ocean observing systems into an Integrated Ocean Observing System (IOOS). OOSTethys' goal is to dramatically reduce the time it takes to install, adopt and update standards-compliant web services. OOSTethys has developed servers, clients and a registry. Open source Perl, Python, Java and ASP toolkits and reference implementations are helping the marine community publish near real-time observation data in interoperable standard formats. In some cases publishing an Open Geospatial Consortium (OGC) Sensor Observation Service (SOS) from NetCDF files, a database or even CSV text files could take only minutes, depending on the skills of the developer. OOSTethys is also developing an OGC standard registry, Catalog Service for Web (CSW). This open source CSW registry was implemented to easily register and discover SOSs using ISO 19139 service metadata. A web interface layer over the CSW registry simplifies the registration process by harvesting metadata describing the observations and sensors from the “GetCapabilities” response of the SOS. OpenIOOS is the web client, developed in Perl, to visualize the sensors in the SOS services. While the number of OOSTethys software developers is small, currently about 10 around the world, the number of OOSTethys toolkit implementers is larger and growing, and the ease of use has played a large role in spreading the use of interoperable, standards-compliant web services widely in the marine community.

  6. In Vivo Iron-Chelating Activity and Phenolic Profiles of the Angel's Wings Mushroom, Pleurotus porrigens (Higher Basidiomycetes).

    PubMed

    Khalili, Masoumeh; Ebrahimzadeh, Mohammad Ali; Kosaryan, Mehrnoush

    2015-01-01

    Pleurotus porrigens is a culinary-medicinal mushroom. It is locally called sadafi and is found in the northern regions of Iran, especially in Mazandaran. This mushroom is used to prepare a variety of local and specialty foods. Because of the phenol and flavonoid contents and the strong iron-chelating activity of this mushroom, it was selected for an assay of in vivo iron-chelating activity. The methanolic extract was administered intraperitoneally to iron-overloaded mice at two dosages (200 and 400 mg/kg/24 hours) for a total of 20 days, with a frequency of 5 times a week for 4 successive weeks. The total iron content was determined by atomic absorption spectroscopy. Plasma Fe3+ content was determined using a kit. Liver sections were stained with hematoxylin and eosin and Perls stain. A significant decrease in the plasma concentration of iron was observed in mice treated with extracts (P < 0.001). The animals showed a dramatic decrease in plasma Fe3+ content when compared with the control group (P < 0.001). Also, Perls stain showed a smaller amount of deposited iron in the livers of iron-overloaded mice treated with the extract. Liver sections revealed a marked reduction in the extent of necrotic hepatocytes, fibrous tissues, and pseudo-lobules. A high-performance liquid chromatography method was developed to simultaneously separate 7 phenolic acids in the extract. Rutin (1.784 ± 0.052 mg g(-1) of extract) and p-coumaric acid (1.026 ± 0.043 mg g(-1) of extract) were detected as the main flavonoid and phenolic acid in the extract, respectively. The extract exhibited satisfactory potency to chelate excessive iron in mice, potentially offering a new natural alternative to treat patients with iron overload. More studies are needed to determine which compounds are responsible for these biological activities.

  7. Development and Implementation of Dynamic Scripts to Support Local Model Verification at National Weather Service Weather Forecast Offices

    NASA Technical Reports Server (NTRS)

    Zavordsky, Bradley; Case, Jonathan L.; Gotway, John H.; White, Kristopher; Medlin, Jeffrey; Wood, Lance; Radell, Dave

    2014-01-01

    Local modeling with a customized configuration is conducted at National Weather Service (NWS) Weather Forecast Offices (WFOs) to produce high-resolution numerical forecasts that can better simulate local weather phenomena and complement larger scale global and regional models. The advent of the Environmental Modeling System (EMS), which provides a pre-compiled version of the Weather Research and Forecasting (WRF) model and wrapper Perl scripts, has enabled forecasters to easily configure and execute the WRF model on local workstations. NWS WFOs often use EMS output to help in forecasting highly localized, mesoscale features such as convective initiation, the timing and inland extent of lake effect snow bands, lake and sea breezes, and topographically-modified winds. However, quantitatively evaluating model performance to determine errors and biases still proves to be one of the challenges in running a local model. Developed at the National Center for Atmospheric Research (NCAR), the Model Evaluation Tools (MET) verification software makes performing these types of quantitative analyses easier, but operational forecasters do not generally have time to familiarize themselves with navigating the sometimes complex configurations associated with the MET tools. To assist forecasters in running a subset of MET programs and capabilities, the Short-term Prediction Research and Transition (SPoRT) Center has developed and transitioned a set of dynamic, easily configurable Perl scripts to collaborating NWS WFOs. The objective of these scripts is to provide SPoRT collaborating partners in the NWS with the ability to evaluate the skill of their local EMS model runs in near real time with little prior knowledge of the MET package. The ultimate goal is to make these verification scripts available to the broader NWS community in a future version of the EMS software. This paper provides an overview of the SPoRT MET scripts, instructions for how the scripts are run, and example use cases.

  8. Implementation of a Shared Resource Financial Management System

    PubMed Central

    Caldwell, T.; Gerlach, R.; Israel, M.; Bobin, S.

    2010-01-01

    CF-6 Norris Cotton Cancer Center (NCCC), an NCI-designated Comprehensive Cancer Center at Dartmouth Medical School, administers 12 Life Sciences Shared Resources. These resources are diverse and offer multiple products and services. Previous methods for tracking resource use, billing, and financial management were time consuming, error prone and lacked appropriate financial management tools. To address these problems, we developed and implemented a web-based application with a built-in authorization system that uses Perl, ModPerl, Apache2, and Oracle as the software infrastructure. The application uses a role-based system to differentiate administrative users with those requesting services and includes many features requested by users and administrators. To begin development, we chose a resource that had an uncomplicated service, a large number of users, and required the use of all of the applications features. The Molecular Biology Core Facility at NCCC fit these requirements and was used as a model for developing and testing the application. After model development, institution wide deployment followed a three-stage process. The first stage was to interview the resource manager and staff to understand day-to-day operations. At the second stage, we generated and tested customized forms defining resource services. During the third stage, we added new resource users and administrators to the system before final deployment. Twelve months after deployment, resource administrators reported that the new system performed well for internal and external billing and tracking resource utilization. Users preferred the application's web-based system for distribution of DNA sequencing and other data. The sample tracking features have enhanced day-to-day resource operations, and an on-line scheduling module for shared instruments has proven a much-needed utility. Principal investigators now are able to restrict user spending to specific accounts and have final approval of the invoices before the billing, which has significantly reduced the number of unpaid invoices.

  9. The Ensembl Web Site: Mechanics of a Genome Browser

    PubMed Central

    Stalker, James; Gibbins, Brian; Meidl, Patrick; Smith, James; Spooner, William; Hotz, Hans-Rudolf; Cox, Antony V.

    2004-01-01

    The Ensembl Web site (http://www.ensembl.org/) is the principal user interface to the data of the Ensembl project, and currently serves >500,000 pages (∼2.5 million hits) per week, providing access to >80 GB (gigabyte) of data to users in more than 80 countries. Built atop an open-source platform comprising Apache/mod_perl and the MySQL relational database management system, it is modular, extensible, and freely available. It is being actively reused and extended in several different projects, and has been downloaded and installed in companies and academic institutions worldwide. Here, we describe some of the technical features of the site, with particular reference to its dynamic configuration that enables it to handle disparate data from multiple species. PMID:15123591

  10. Calculation of streamflow statistics for Ontario and the Great Lakes states

    USGS Publications Warehouse

    Piggott, Andrew R.; Neff, Brian P.

    2005-01-01

    Basic, flow-duration, and n-day frequency statistics were calculated for 779 current and historical streamflow gages in Ontario and 3,157 streamflow gages in the Great Lakes states with length-of-record daily mean streamflow data ending on December 31, 2000 and September 30, 2001, respectively. The statistics were determined using the U.S. Geological Survey's SWSTAT, IOWDM, ANNIE, and LIBANNE software and Linux shell and Perl programming that enabled the mass processing of the data and calculation of the statistics. Verification exercises were performed to assess the accuracy of the processing and calculations. The statistics and descriptions, longitudes and latitudes, and drainage areas for each of the streamflow gages are summarized in ASCII text files and ESRI shapefiles.

  11. The Ensembl Web site: mechanics of a genome browser.

    PubMed

    Stalker, James; Gibbins, Brian; Meidl, Patrick; Smith, James; Spooner, William; Hotz, Hans-Rudolf; Cox, Antony V

    2004-05-01

    The Ensembl Web site (http://www.ensembl.org/) is the principal user interface to the data of the Ensembl project, and currently serves >500,000 pages (approximately 2.5 million hits) per week, providing access to >80 GB (gigabyte) of data to users in more than 80 countries. Built atop an open-source platform comprising Apache/mod_perl and the MySQL relational database management system, it is modular, extensible, and freely available. It is being actively reused and extended in several different projects, and has been downloaded and installed in companies and academic institutions worldwide. Here, we describe some of the technical features of the site, with particular reference to its dynamic configuration that enables it to handle disparate data from multiple species.
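
    mod_perl, mentioned in both Ensembl records above, embeds the Perl interpreter inside Apache so handlers run without per-request forking. A minimal, generic mod_perl 2 response handler (illustrative only, not Ensembl code) looks like this:

        package My::Hello;
        # Generic mod_perl 2 response handler sketch. Enable it with:
        #   <Location /hello>
        #     SetHandler perl-script
        #     PerlResponseHandler My::Hello
        #   </Location>
        use strict;
        use warnings;
        use Apache2::RequestRec ();
        use Apache2::RequestIO ();
        use Apache2::Const -compile => qw(OK);

        sub handler {
            my $r = shift;
            $r->content_type('text/html');
            $r->print('<p>Served from inside Apache, no CGI fork needed.</p>');
            return Apache2::Const::OK;
        }
        1;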

  12. BioSmalltalk: a pure object system and library for bioinformatics.

    PubMed

    Morales, Hernán F; Giovambattista, Guillermo

    2013-09-15

    We have developed BioSmalltalk, a new environment system for pure object-oriented bioinformatics programming. Adaptive end-user programming systems tend to become more important for discovering biological knowledge, as is demonstrated by the emergence of open-source programming toolkits for bioinformatics in the past years. Our software is intended to bridge the gap between bioscientists and rapid software prototyping while preserving the possibility of scaling to whole-system biology applications. BioSmalltalk performs better in terms of execution time and memory usage than Biopython and BioPerl for some classical situations. BioSmalltalk is cross-platform and freely available (MIT license) through Google Project Hosting at http://code.google.com/p/biosmalltalk. Contact: hernan.morales@gmail.com. Supplementary data are available at Bioinformatics online.

  13. Automated Sequence Processor: Something Old, Something New

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara; Schrock, Mitchell; Fisher, Forest; Himes, Terry

    2012-01-01

    High productivity is required for operations teams to meet schedules, and risk must be minimized. Scripting is used to automate processes, and scripts perform essential operations functions. The Automated Sequence Processor (ASP) was a grass-roots task built to automate the command uplink process, and a system engineering task for ASP revitalization was organized. ASP is a set of approximately 200 scripts written in Perl, C Shell, AWK and other scripting languages. ASP processes, checks and packages non-interactive commands automatically. Non-interactive commands are guaranteed to be safe and have been checked by hardware or software simulators. ASP verifies that commands are non-interactive, processes them through a command simulator, and then packages them if there are no errors. ASP must be active 24 hours a day, 7 days a week.

  14. JLIFE: THE JEFFERSON LAB INTERACTIVE FRONT END FOR THE OPTICAL PROPAGATION CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Anne M.; Shinn, Michelle D.

    2013-08-01

    We present details on a graphical interface for the open source software program Optical Propagation Code, or OPC. This interface, written in Java, allows a user with no knowledge of OPC to create an optical system, with lenses, mirrors, apertures, etc. and the appropriate drifts between them. The Java code creates the appropriate Perl script that serves as the input for OPC. The mode profile is then output at each optical element. The display can be either an intensity profile along the x axis, or an isometric 3D plot which can be tilted and rotated. These profiles can be saved. Examples of the input and output will be presented.

  15. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
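
    The article's combination of a multi-thread dispatcher and SOAP web services can be sketched with threads, Thread::Queue and SOAP::Lite. The host URLs, the service URI and the run_model operation below are placeholders for whatever the remote model service actually exposes:

        #!/usr/bin/perl
        # Hedged sketch of the dispatch pattern: one thread per remote
        # SOAP host, each pulling work items from a shared queue.
        use strict;
        use warnings;
        use threads;
        use Thread::Queue;
        use SOAP::Lite;

        my @hosts = map { "http://node$_:8080/" } 1 .. 4;   # hypothetical workers
        my $queue = Thread::Queue->new(1 .. 1000, undef);   # cell ids + stop marker

        my @workers = map {
            my $proxy = $_;
            threads->create(sub {
                my $soap = SOAP::Lite->uri('urn:EmissionModel')->proxy($proxy);
                while (defined(my $cell = $queue->dequeue)) {
                    my $flux = $soap->run_model($cell)->result;
                    print "cell $cell -> $flux\n";
                }
                $queue->enqueue(undef);   # pass the stop marker on
            });
        } @hosts;
        $_->join for @workers;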

  16. Gromita: a fully integrated graphical user interface to gromacs 4.

    PubMed

    Sellis, Diamantis; Vlachakis, Dimitrios; Vlassi, Metaxia

    2009-09-07

    Gromita is a fully integrated and efficient graphical user interface (GUI) to the recently updated molecular dynamics suite Gromacs, version 4. Gromita is a cross-platform, Perl/Tcl-Tk based, interactive front end designed to break the command line barrier and introduce a new user-friendly environment for running molecular dynamics simulations through Gromacs. Our GUI features a novel workflow interface that guides the user through each logical step of the molecular dynamics setup process, making it accessible to both advanced and novice users. This tool provides a seamless interface to the Gromacs package, while providing enhanced functionality by speeding up and simplifying the task of setting up molecular dynamics simulations of biological systems. Gromita can be freely downloaded from http://bio.demokritos.gr/gromita/.

  17. New Directions in the NOAO Observing Proposal System

    NASA Astrophysics Data System (ADS)

    Gasson, David; Bell, Dave

    For the past eight years NOAO has been refining its on-line observing proposal system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form, or via the Gemini Phase I Tool. NOAO staff can use the system to do administrative tasks, scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available on-line, including the proposals themselves (in HTML, PDF and PostScript) and technical comments. Grades and TAC comments are entered and edited through web forms, and can be sorted and filtered according to specified criteria. Current developments include a move away from proprietary solutions, toward open standards such as SQL (in the form of the MySQL relational database system), Perl, PHP and XML.

  18. ViennaNGS: A toolbox for building efficient next- generation sequencing analysis pipelines

    PubMed Central

    Wolfinger, Michael T.; Fallmann, Jörg; Eggenhofer, Florian; Amman, Fabian

    2015-01-01

    Recent achievements in next-generation sequencing (NGS) technologies have led to a high demand for reusable software components to easily compile customized analysis workflows for big genomics data. We present ViennaNGS, an integrated collection of Perl modules focused on building efficient pipelines for NGS data processing. It comes with functionality for extracting and converting features from common NGS file formats, computation and evaluation of read mapping statistics, as well as normalization of RNA abundance. Moreover, ViennaNGS provides software components for identification and characterization of splice junctions from RNA-seq data, parsing and condensing sequence motif data, automated construction of Assembly and Track Hubs for the UCSC genome browser, as well as wrapper routines for a set of commonly used NGS command line tools. PMID:26236465

  19. New strategic directions for Caribbean CSM project.

    PubMed

    1986-01-01

    Recent changes in the strategy of the Caribbean Contraceptive Social Marketing Project (CCSMP) emphasize the condom, under the brand name Panther. Beginning in 1984, CCSMP marketed its Perle brand of oral contraceptive (since dropped) in Barbados, St. Vincent and St. Lucia. Now wider commercial connections are envisioned, with support by CCSMP to promote generic brands. The Panther condom campaign will include an array of mass media, point-of-purchase and sporting event advertising. Pharmacies report that Panther is selling as well as the leading commercial brand. CCSMP is looking to introduce an ultra-thin condom and a vaginal foaming tablet. Market research, involving physicians and users as well as retail audits, indicates that although population numbers alone are not a serious problem in the Caribbean, early pregnancy is a concern in the area.

  20. ParTIES: a toolbox for Paramecium interspersed DNA elimination studies.

    PubMed

    Denby Wilkes, Cyril; Arnaiz, Olivier; Sperling, Linda

    2016-02-15

    Developmental DNA elimination occurs in a wide variety of multicellular organisms, but ciliates are the only single-celled eukaryotes in which this phenomenon has been reported. Despite considerable interest in ciliates as models for DNA elimination, no standard methods for identification and characterization of the eliminated sequences are currently available. We present the Paramecium Toolbox for Interspersed DNA Elimination Studies (ParTIES), designed for Paramecium species, that (i) identifies eliminated sequences, (ii) measures their presence in a sequencing sample and (iii) detects rare elimination polymorphisms. ParTIES is multi-threaded Perl software available at https://github.com/oarnaiz/ParTIES. ParTIES is distributed under the GNU General Public Licence v3. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Generating and Executing Complex Natural Language Queries across Linked Data.

    PubMed

    Hamon, Thierry; Mougin, Fleur; Grabar, Natalia

    2015-01-01

    With the recent and intensive research in the biomedical area, the knowledge accumulated is disseminated through various knowledge bases. Links between these knowledge bases are needed in order to use them jointly. Linked Data, the SPARQL language, and natural language question-answering interfaces provide interesting solutions for querying such knowledge bases. We propose a method for translating natural language questions into SPARQL queries. We use Natural Language Processing tools, semantic resources, and the RDF triples description. The method was designed on 50 questions over 3 biomedical knowledge bases, and evaluated on 27 questions. It achieves 0.78 F-measure on the test set. The method for translating natural language questions into SPARQL queries is implemented as a Perl module available at http://search.cpan.org/~thhamon/RDF-NLP-SPARQLQuery.

  2. Chado controller: advanced annotation management with a community annotation system.

    PubMed

    Guignon, Valentin; Droc, Gaëtan; Alaux, Michael; Baurens, Franc-Christophe; Garsmeur, Olivier; Poiron, Claire; Carver, Tim; Rouard, Mathieu; Bocs, Stéphanie

    2012-04-01

    We developed a controller that is compliant with the Chado database schema, GBrowse and genome annotation-editing tools such as Artemis and Apollo. It enables the management of public and private data, monitors manual annotation (with controlled vocabularies, structural and functional annotation controls) and stores versions of annotation for all modified features. The Chado Controller uses PostgreSQL and Perl. The Chado Controller package is available for download at http://www.gnpannot.org/content/chado-controller and runs on any Unix-like operating system; documentation is available at http://www.gnpannot.org/content/chado-controller-doc. The system can be tested using the GNPAnnot Sandbox at http://www.gnpannot.org/content/gnpannot-sandbox-form. Contact: valentin.guignon@cirad.fr; stephanie.sidibe-bocs@cirad.fr. Supplementary data are available at Bioinformatics online.

  3. MK3TOOLS & NetCDF - storing VLBI data in a machine independent array oriented data format

    NASA Astrophysics Data System (ADS)

    Hobiger, T.; Koyama, Y.; Kondo, T.

    2007-07-01

    In the beginning of 2002 the International VLBI Service (IVS) agreed to introduce a Platform-Independent VLBI Exchange format (PIVEX), which would permit the exchange of observational data and stimulate research across different analysis groups. Unfortunately, PIVEX has never been implemented, and many analysis software packages still depend on prior processing (e.g. ambiguity resolution and computation of ionosphere corrections) done by CALC/SOLVE. Thus MK3TOOLS, which handles MK3 databases without CALC/SOLVE being installed, has been developed. It uses the NetCDF format to store the data, and since interfaces exist for a variety of programming languages (Fortran, C/C++, Java, Perl, Python) it can be easily incorporated into existing and upcoming analysis software packages.

  4. SEM (Symmetry Equivalent Molecules): a web-based GUI to generate and visualize the macromolecules

    PubMed Central

    Hussain, A. S. Z.; Kumar, Ch. Kiran; Rajesh, C. K.; Sheik, S. S.; Sekar, K.

    2003-01-01

    SEM, Symmetry Equivalent Molecules, is a web-based graphical user interface to generate and visualize the symmetry equivalent molecules (proteins and nucleic acids). In addition, the program allows the users to save the three-dimensional atomic coordinates of the symmetry equivalent molecules in the local machine. The widely recognized graphics program RasMol has been deployed to visualize the reference (input atomic coordinates) and the symmetry equivalent molecules. This program is written using CGI/Perl scripts and has been interfaced with all the three-dimensional structures (solved using X-ray crystallography) available in the Protein Data Bank. The program, SEM, can be accessed over the World Wide Web interface at http://dicsoft2.physics.iisc.ernet.in/sem/ or http://144.16.71.11/sem/. PMID:12824326

  5. Report on IVS-WG4

    NASA Astrophysics Data System (ADS)

    Gipson, John

    2011-07-01

    I describe the proposed data structure for storing, archiving and processing VLBI data. In this scheme, most VLBI data are stored in NetCDF files. NetCDF has the advantage that there are interfaces to most common computer languages, including Fortran, Fortran 90, C, C++, Perl, etc., and the most common operating systems, including Linux, Windows and Mac. The data files for a particular session are organized by special ASCII "wrapper" files which contain pointers to the data files. This allows great flexibility in the processing and analysis of VLBI data, and also allows for extending the types of data used, e.g., source maps. I discuss the use of the new format in CALC/SOLVE and other VLBI analysis packages. I also discuss plans for transitioning to the new structure.

  6. Accessing near real-time Antarctic meteorological data through an OGC Sensor Observation Service (SOS)

    NASA Astrophysics Data System (ADS)

    Kirsch, Peter; Breen, Paul

    2013-04-01

    We wish to highlight outputs of a project conceived from a science requirement to improve discovery of and access to Antarctic meteorological data in near real-time. Given that the data are distributed in both the spatial and temporal domains and are accessed across several science disciplines, the creation of an interoperable, OGC-compliant web service was deemed the most appropriate approach. We will demonstrate an implementation of the OGC SOS Interface Standard to discover, browse, and access Antarctic meteorological datasets. A selection of programmatic (R, Perl) and web client interfaces utilizing open technologies (e.g. jQuery, Flot, OpenLayers) will be demonstrated. In addition we will show how high-level abstractions can be constructed to allow users flexible and straightforward access to SOS-retrieved data.

  7. CommServer: A Communications Manager For Remote Data Sites

    NASA Astrophysics Data System (ADS)

    Irving, K.; Kane, D. L.

    2012-12-01

    CommServer is a software system that manages making connections to remote data-gathering stations, providing a simple network interface to client applications. The client requests a connection to a site by name, and the server establishes the connection, providing a bidirectional channel between the client and the target site if successful. CommServer was developed to manage networks of FreeWave serial data radios with multiple data sites, repeaters, and network-accessed base stations, and has been in continuous operational use for several years. Support for Iridium modems using RUDICS will be added soon, and no changes to the application interface are anticipated. CommServer is implemented on Linux using programs written in bash shell, Python, Perl, AWK, under a set of conventions we refer to as ThinObject.
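
    A client of such a connection manager reduces to a plain TCP socket exchange. The port number and the line-oriented CONNECT protocol below are assumptions for illustration, not CommServer's documented interface:

        #!/usr/bin/perl
        # Hedged sketch of a CommServer-style client: ask the manager,
        # by site name, for a channel to a field station.
        use strict;
        use warnings;
        use IO::Socket::INET;

        my $sock = IO::Socket::INET->new(
            PeerAddr => 'localhost',
            PeerPort => 5000,          # hypothetical CommServer port
            Proto    => 'tcp',
        ) or die "connect failed: $!";

        print {$sock} "CONNECT upper_kuparuk\n";   # request a site by name
        my $reply = <$sock>;
        die "no channel: $reply" unless $reply =~ /^OK/;

        # The socket is now a bidirectional channel to the data logger.
        print {$sock} "GET LATEST\n";
        print scalar <$sock>;
        close $sock;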

  8. System for NIS Forecasting Based on Ensembles Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-01-02

    BMA-NIS is a package/library designed to be called by a script (e.g. Perl or Python). The software itself is written in the language of R. The software assists electric power delivery systems in planning resource availability and demand, based on historical data and current data variables. Net Interchange Schedule (NIS) is the algebraic sum of all energy scheduled to flow into or out of a balancing area during any interval. Accurate forecasts for NIS are important so that the Area Control Error (ACE) stays within an acceptable limit. To date, there are many approaches for forecasting NIS, but all of these are based on single models that can be sensitive to time-of-day and day-of-week effects.

  9. Development of a Response Planner using the UCT Algorithm for Cyber Defense

    DTIC Science & Technology

    2013-03-01

    (No abstract available. The indexed text consists of table fragments from the report: a list of KDD Cup attack types with their categories and instance counts, e.g. guess_passwd (r2l), imap (r2l), ipsweep (probe), land (dos), neptune (dos), nmap (probe), perl (u2r), pod (dos), portsweep (probe), plus rows of a confusion matrix.)

  10. Using NetCloak to develop server-side Web-based experiments without writing CGI programs.

    PubMed

    Wolfe, Christopher R; Reyna, Valerie F

    2002-05-01

    Server-side experiments use the Web server, rather than the participant's browser, to handle tasks such as random assignment, eliminating inconsistencies with Java and other client-side applications. Heretofore, experimenters wishing to create server-side experiments have had to write common gateway interface (CGI) scripts in programming languages such as Perl and C++. NetCloak uses simple, HTML-like commands to create CGIs. We used NetCloak to implement an experiment on probability estimation. Measurements of time on task and participants' IP addresses assisted quality control. Without prior training, in less than 1 month, we were able to use NetCloak to design and create a Web-based experiment and to help graduate students create three Web-based experiments of their own.

  11. HippDB: a database of readily targeted helical protein-protein interactions.

    PubMed

    Bergey, Christina M; Watkins, Andrew M; Arora, Paramjit S

    2013-11-01

    HippDB catalogs every protein-protein interaction whose structure is available in the Protein Data Bank and which exhibits one or more helices at the interface. The Web site accepts queries on variables such as helix length and sequence, and it provides computational alanine scanning and change in solvent-accessible surface area values for every interfacial residue. HippDB is intended to serve as a starting point for structure-based small molecule and peptidomimetic drug development. HippDB is freely available on the web at http://www.nyu.edu/projects/arora/hippdb. The Web site is implemented in PHP, MySQL and Apache. Source code is freely available for download at http://code.google.com/p/helidb, implemented in Perl and supported on Linux. Contact: arora@nyu.edu.

  12. CancerNet redistribution via WWW.

    PubMed

    Quade, G; Püschel, N; Far, F

    1996-01-01

    CancerNet from the National Cancer Institute contains nearly 500 ASCII files, updated monthly, with up-to-date information about cancer and the "Golden Standard" in tumor therapy. Perl scripts are used to convert these files to HTML documents. A complex algorithm, using regular expression matching and extensive exception handling, detects headlines, listings and other constructs of the original ASCII text and converts them into their HTML counterparts. A table of contents is also created during the process. The resulting files are indexed for full-text search via WAIS. Building the complete CancerNet WWW redistribution takes less than two hours with a minimum of manual work. At 26,000 requests for information per month, the average cost of delivering one document worldwide is about 19 cents.
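
    The core of such an ASCII-to-HTML converter is a line-classifying loop of regular expressions. The heuristics below (all-caps lines as headings, dash-prefixed lines as list items) are simplified stand-ins for the far more elaborate rules the article describes:

        #!/usr/bin/perl
        # Simplified sketch of an ASCII-to-HTML converter; not the real
        # CancerNet scripts, whose rule set is much larger.
        use strict;
        use warnings;

        print "<html><body>\n";
        while (my $line = <>) {
            chomp $line;
            # Escape characters that are special in HTML.
            for ($line) { s/&/&amp;/g; s/</&lt;/g; s/>/&gt;/g; }
            if ($line =~ /^[A-Z][A-Z .]+$/) {          # ALL-CAPS line => heading
                print "<h2>$line</h2>\n";
            } elsif ($line =~ /^\s*[-*]\s+(.*)/) {     # dash/star => list item
                print "<li>$1</li>\n";
            } elsif ($line =~ /^\s*$/) {               # blank line => new paragraph
                print "<p>\n";
            } else {
                print "$line\n";
            }
        }
        print "</body></html>\n";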

  13. Web-based biobank system infrastructure monitoring using Python, Perl, and PHP.

    PubMed

    Norling, Martin; Kihara, Absolomon; Kemp, Steve

    2013-12-01

    The establishment and maintenance of biobanks is only as worthwhile as the security and logging of the biobank contents. We have designed a monitoring system that continuously measures temperature and gas content, records the movement of samples in and out of the biobank, and also records the opening and closing of the freezers, storing the results and images in a database. We have also incorporated an early warning feature that sends out alerts, via SMS and email, to responsible persons if any measurement is recorded outside the acceptable limits, guaranteeing the integrity of biobanked samples, as well as reagents used in sample analysis. A surveillance system like this increases the value of any biobank, as the initial investment is small and the value of having trustworthy samples for future research is high.

  14. ViralEpi v1.0: a high-throughput spectrum of viral epigenomic methylation profiles from diverse diseases.

    PubMed

    Khan, Mohd Shoaib; Gupta, Amit Kumar; Kumar, Manoj

    2016-01-01

    To develop a computational resource for viral epigenomic methylation profiles from diverse diseases. Methylation patterns of Epstein-Barr virus and hepatitis B virus genomic regions are provided as a web platform developed using the open source Linux-Apache-MySQL-PHP (LAMP) bundle and the programming and scripting languages HTML, JavaScript and Perl. A comprehensive and integrated web resource, ViralEpi v1.0, is developed, providing a well-organized compendium of methylation events and statistical analysis associated with several diseases. Additionally, it also facilitates a 'Viral EpiGenome Browser' for a user-friendly browsing experience using the JavaScript-based JBrowse. This web resource would be helpful for the research community engaged in studying epigenetic biomarkers for appropriate prognosis and diagnosis of diseases and their various stages.

  15. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web-application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and Perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back-end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for data base access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or Perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. Other back-end services may include generation of Google Earth layers using KML; generation of maps via WMS or ArcIMS protocols; and data manipulation with Unix utilities.
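
    A client for the XML-in-a-GET-URL protocol described above only needs an HTTP library and URL escaping. The server address and the request-document structure below are placeholders rather than the actual LAS request schema:

        #!/usr/bin/perl
        # Hedged sketch of a client building an XML request embedded in
        # an HTTP GET URL. Hostname and XML structure are hypothetical.
        use strict;
        use warnings;
        use LWP::UserAgent;
        use URI::Escape qw(uri_escape);

        my $xml = '<lasRequest><link path="/lasdocs/operations/plot"/>'
                . '<properties/><args><link path="/lasdata/sst"/></args>'
                . '</lasRequest>';
        my $url = 'http://las.example.org/ProductServer.do?xml='
                . uri_escape($xml);

        my $ua  = LWP::UserAgent->new;
        my $res = $ua->get($url);
        die $res->status_line unless $res->is_success;
        print $res->decoded_content;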

  16. “One code to find them all”: a perl tool to conveniently parse RepeatMasker output files

    PubMed Central

    2014-01-01

    Background Of the different bioinformatic methods used to recover transposable elements (TEs) in genome sequences, one of the most commonly used procedures is the homology-based method proposed by the RepeatMasker program. RepeatMasker generates several output files, including the .out file, which provides annotations for all detected repeats in a query sequence. However, a remaining challenge consists of identifying the different copies of TEs that correspond to the identified hits. This step is essential for any evolutionary/comparative analysis of the different copies within a family. Different possibilities can lead to multiple hits corresponding to a unique copy of an element, such as the presence of large deletions/insertions or undetermined bases, and distinct consensus corresponding to a single full-length sequence (like for long terminal repeat (LTR)-retrotransposons). These possibilities must be taken into account to determine the exact number of TE copies. Results We have developed a perl tool that parses the RepeatMasker .out file to better determine the number and positions of TE copies in the query sequence, in addition to computing quantitative information for the different families. To determine the accuracy of the program, we tested it on several RepeatMasker .out files corresponding to two organisms (Drosophila melanogaster and Homo sapiens) for which the TE content has already been largely described and which present great differences in genome size, TE content, and TE families. Conclusions Our tool provides access to detailed information concerning the TE content in a genome at the family level from the .out file of RepeatMasker. This information includes the exact position and orientation of each copy, its proportion in the query sequence, and its quality compared to the reference element. In addition, our tool allows a user to directly retrieve the sequence of each copy and obtain the same detailed information at the family level when a local library with incomplete TE class/subclass information was used with RepeatMasker. We hope that this tool will be helpful for people working on the distribution and evolution of TEs within genomes.
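
    To make the merging step concrete, here is a much-simplified Perl sketch in the spirit of the tool: it groups RepeatMasker .out hits that share a fragment ID (the last column links fragments of one insertion) into single copies. This is an assumption-laden illustration only; the published program handles additional cases such as LTR/internal consensus joins and computes quality statistics.

        #!/usr/bin/perl
        # Simplified sketch (not the published tool): merge RepeatMasker .out
        # hits that share a fragment ID (last column) into single TE copies.
        use strict;
        use warnings;

        my %copy;    # "query:family:ID" => {query, family, start, end}
        while (my $line = <>) {
            next unless $line =~ /^\s*\d+/;            # skip header lines
            my @f  = split ' ', $line;
            my $id = $f[-1] eq '*' ? $f[-2] : $f[-1];  # '*' marks an overlap
            my ($query, $qstart, $qend, $family) = @f[4, 5, 6, 10];
            my $c = $copy{"$query:$family:$id"}
                ||= { query => $query, family => $family,
                      start => $qstart, end => $qend };
            $c->{start} = $qstart if $qstart < $c->{start};
            $c->{end}   = $qend   if $qend   > $c->{end};
        }
        for my $key (sort keys %copy) {
            print join("\t", @{ $copy{$key} }{qw(query family start end)}), "\n";
        }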

  17. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    PubMed Central

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H.A.; Hlavacek, William S.; Posner, Richard G.

    2016-01-01

    Summary: Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. Availability and implementation: BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary information: Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com PMID:26556387

  18. Innovative Technology for Teaching Introductory Astronomy

    NASA Astrophysics Data System (ADS)

    Guidry, Mike

    The application of state-of-the-art technology (primarily Java and Flash MX ActionScript on the client side, and Java, PHP, PERL, XML and SQL databasing on the server side) to the teaching of introductory astronomy will be discussed. A completely online syllabus in introductory astronomy built around more than 350 interactive animations called "Online Journey through Astronomy" and a new set of 20 online virtual laboratories in astronomy that we are currently developing will be used as illustrations. In addition to a demonstration of the technology, our experience using these technologies to teach introductory astronomy to thousands of students, in settings ranging from traditional classrooms to full distance learning, will be summarized. Recent experiments using Java and vector-graphics programming of handheld devices (personal digital assistants and cell phones) with wireless wide-area connectivity for applications in astronomy education will also be described.

  19. Calypso: a user-friendly web-server for mining and visualizing microbiome-environment interactions.

    PubMed

    Zakrzewski, Martha; Proietti, Carla; Ellis, Jonathan J; Hasan, Shihab; Brion, Marie-Jo; Berger, Bernard; Krause, Lutz

    2017-03-01

    Calypso is an easy-to-use online software suite that allows non-expert users to mine, interpret and compare taxonomic information from metagenomic or 16S rDNA datasets. Calypso has a focus on multivariate statistical approaches that can identify complex environment-microbiome associations. The software enables quantitative visualizations, statistical testing, multivariate analysis, supervised learning, factor analysis, multivariable regression, network analysis and diversity estimates. Comprehensive help pages, tutorials and videos are provided via a wiki page. The web interface is accessible via http://cgenome.net/calypso/ . The software is programmed in Java, PERL and R and the source code is available from Zenodo ( https://zenodo.org/record/50931 ). The software is freely available for non-commercial users. Contact: l.krause@uq.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  20. Trends in surface ozone concentrations at Arosa (Switzerland)

    NASA Astrophysics Data System (ADS)

    Staehelin, Johannes; Thudium, Juerg; Buehler, Ralph; Volz-Thomas, Andreas; Graber, Werner

    During the years 1989-1991, ozone was measured at four sites around Arosa (Switzerland). One of these sites was identical with the site where surface ozone was measured in the 1950s (Götz and Volz, 1951; Perl, 1965). Comparison of both old and recent data indicates that surface ozone concentrations at Arosa have increased by a factor of approximately 2.2. The increase shows a seasonal variation, with a relative increase of more than a factor of three in December and January. The results are discussed in the context of measurements made at other times, locations and altitudes. The comparison indicates that the increase in ozone levels at Arosa has most likely occurred between the fifties and today. The measurements additionally suggest that photochemical ozone production in the free troposphere has significantly contributed to the observed ozone trends in winter.

  1. Aerobraking Maneuver (ABM) Report Generator

    NASA Technical Reports Server (NTRS)

    Fisher, Forrest; Gladden, Roy; Khanampornpan, Teerapat

    2008-01-01

    abmREPORT Version 3.1 is a Perl script that extracts vital summarization information from the Mars Reconnaissance Orbiter (MRO) aerobraking ABM build process. This information facilitates sequence reviews and provides a high-level summarization of the sequence for mission management. The script extracts information from the ENV, SSF, FRF, SCMFmax, and OPTG files and from burn-magnitude configuration files, and presents it in a single, easy-to-check report that provides the majority of the parameters necessary for cross check and verification during the sequence review process. This means that needed information, formerly spread across a number of different files and each in a different format, is all available in this one application. This program was built on the capabilities developed in dragReport, and the two scripts subsequently evolved in parallel.

  2. A new information architecture, website and services for the CMS experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Lucas; Rusack, Eleanor; Zemleris, Vidmantas

    2012-01-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture, the system design, implementation and monitoring, the document and content database, security aspects, and our deployment strategy, which ensured continual smooth operation of all systems at all times.

  3. A new Information Architecture, Website and Services for the CMS Experiment

    NASA Astrophysics Data System (ADS)

    Taylor, Lucas; Rusack, Eleanor; Zemleris, Vidmantas

    2012-12-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture; the system design, implementation and monitoring; the document and content database; security aspects; and our deployment strategy, which ensured continual smooth operation of all systems at all times.

  4. Lightweight computational steering of very large scale molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Gregory S.; Nickless, William K.; Thiede, David R.

    Enterprise level cyber security requires the deployment, operation, and monitoring of many sensors across geographically dispersed sites. Communicating with the sensors to gather data and control behavior is a challenging task when the number of sensors is rapidly growing. This paper describes the system requirements, design, and implementation of T3, the third generation of our transport software that performs this task. T3 relies on open source software and open Internet standards. Data is encoded in MIME format messages and transported via NNTP, which provides scalability. OpenSSL and public key cryptography are used to secure the data. Robustness and ease of development are increased by defining an internal cryptographic API, implemented by modules in C, Perl, and Python. We are currently using T3 in a production environment. It is freely available to download and use for other projects.
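
    The transport idea, MIME-encoded data posted over NNTP, can be sketched in a few lines of core Perl using Net::NNTP. The host, newsgroup and payload below are placeholders, and the real T3 additionally signs and encrypts messages through its OpenSSL-based cryptographic API.

        #!/usr/bin/perl
        # Rough sketch of the transport described above: a MIME message
        # posted to a newsgroup over NNTP. Host, group and payload are
        # placeholders; real T3 also signs/encrypts via OpenSSL-based modules.
        use strict;
        use warnings;
        use Net::NNTP;    # ships with core Perl (libnet)

        my $payload = "timestamp=2007-01-01T00:00:00Z\nevents=42\n";
        my @article = (
            "From: sensor\@site.example\n",
            "Newsgroups: t3.site1.data\n",
            "Subject: sensor report\n",
            "MIME-Version: 1.0\n",
            "Content-Type: text/plain; charset=us-ascii\n",
            "\n",
            $payload,
        );

        my $nntp = Net::NNTP->new('news.example.org')
            or die "cannot reach NNTP server\n";
        $nntp->post(@article) or die "post failed\n";
        $nntp->quit;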

  6. [The development and evaluation of software to verify diagnostic accuracy].

    PubMed

    Jensen, Rodrigo; de Moraes Lopes, Maria Helena Baena; Silveira, Paulo Sérgio Panse; Ortega, Neli Regina Siqueira

    2012-02-01

    This article describes the development and evaluation of software that verifies the accuracy of diagnoses made by nursing students. The software was based on a model that uses fuzzy logic concepts and was implemented using PERL and a MySQL database for Internet accessibility, with the NANDA-I 2007-2008 classification system. The software was evaluated in terms of its technical quality and usability through specific instruments. The activity proposed in the software involves four stages in which students establish the relationship values between nursing diagnoses, defining characteristics/risk factors, and clinical cases. The relationship values determined by students are compared to those of specialists, generating performance scores for the students. In the evaluation, the software demonstrated satisfactory outcomes regarding technical quality and, according to the students, helped in their learning and may become an educational tool to teach the process of nursing diagnosis.
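
    The comparison step lends itself to a small illustration. The sketch below scores a student's relationship values against specialist values; the diagnoses, the values and the simple averaged-distance scoring formula are invented for the example and are not the paper's fuzzy model.

        #!/usr/bin/perl
        # Invented illustration of the comparison step: score a student's
        # relationship values (0..1) against specialist values. Neither the
        # diagnoses nor the formula are taken from the paper.
        use strict;
        use warnings;

        my %specialist = ( 'fatigue -> activity intolerance' => 0.9,
                           'edema -> excess fluid volume'    => 0.8 );
        my %student    = ( 'fatigue -> activity intolerance' => 0.7,
                           'edema -> excess fluid volume'    => 0.9 );

        my ($sum, $n) = (0, 0);
        for my $rel (keys %specialist) {
            my $answer = exists $student{$rel} ? $student{$rel} : 0;
            $sum += 1 - abs($specialist{$rel} - $answer);  # closer = more credit
            $n++;
        }
        printf "performance score: %.2f\n", $sum / $n;   # 1.00 = perfect match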

  7. Chado Controller: advanced annotation management with a community annotation system

    PubMed Central

    Guignon, Valentin; Droc, Gaëtan; Alaux, Michael; Baurens, Franc-Christophe; Garsmeur, Olivier; Poiron, Claire; Carver, Tim; Rouard, Mathieu; Bocs, Stéphanie

    2012-01-01

    Summary: We developed a controller that is compliant with the Chado database schema, GBrowse and genome annotation-editing tools such as Artemis and Apollo. It enables the management of public and private data, monitors manual annotation (with controlled vocabularies, structural and functional annotation controls) and stores versions of annotation for all modified features. The Chado controller uses PostgreSQL and Perl. Availability: The Chado Controller package is available for download at http://www.gnpannot.org/content/chado-controller and runs on any Unix-like operating system, and documentation is available at http://www.gnpannot.org/content/chado-controller-doc. The system can be tested using the GNPAnnot Sandbox at http://www.gnpannot.org/content/gnpannot-sandbox-form. Contact: valentin.guignon@cirad.fr; stephanie.sidibe-bocs@cirad.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22285827

  8. ABMapper: a suffix array-based tool for multi-location searching and splice-junction mapping.

    PubMed

    Lou, Shao-Ke; Ni, Bing; Lo, Leung-Yau; Tsui, Stephen Kwok-Wing; Chan, Ting-Fung; Leung, Kwong-Sak

    2011-02-01

    Sequencing reads generated by RNA-sequencing (RNA-seq) must first be mapped back to the genome through alignment before they can be further analyzed. Current fast and memory-saving short-read mappers could give us a quick view of the transcriptome. However, they are neither designed for reads that span across splice junctions nor for repetitive reads, which can be mapped to multiple locations in the genome (multi-reads). Here, we describe a new software package: ABMapper, which is specifically designed for exploring all putative locations of reads that are mapped to splice junctions or repetitive in nature. The software is freely available at: http://abmapper.sourceforge.net/. The software is written in C++ and PERL. It runs on all major platforms and operating systems including Windows, Mac OS X and LINUX.

  9. Can Aerosol Direct Radiative Effects Account for Analysis Increments of Temperature in the Tropical Atlantic?

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo M.; Alpert, Pinhas

    2016-01-01

    In the late 1990's, prior to the launch of the Terra satellite, atmospheric general circulation models (GCMs) did not include aerosol processes because aerosols were not properly monitored on a global scale and their spatial distributions were not known well enough for their incorporation in operational GCMs. At the time of the first GEOS Reanalysis (Schubert et al. 1993), long time series of analysis increments (the corrections to the atmospheric state by all available meteorological observations) became readily available, enabling detailed analysis of the GEOS-1 errors on a global scale. Such analysis revealed that temperature biases were particularly pronounced in the Tropical Atlantic region, with patterns depicting a remarkable similarity to dust plumes emanating from the African continent as evidenced by TOMS aerosol index maps. Yoram Kaufman was instrumental in encouraging us to pursue this issue further, resulting in the study reported in Alpert et al. (1998), where we attempted to assess aerosol forcing by studying the errors of the GEOS-1 GCM, which lacked aerosol physics, within a data assimilation system. Based on this analysis, Alpert et al. (1998) put forward that dust aerosols are an important source of inaccuracies in numerical weather-prediction models in the Tropical Atlantic region, although a direct verification of this hypothesis was not possible back then. Nearly 20 years later, numerical prediction models have increased in resolution and complexity of physical parameterizations, including the representation of aerosols and their interactions with the circulation. Moreover, with the advent of NASA's EOS program and subsequent satellites, atmospheric aerosols are now monitored globally on a routine basis, and their assimilation in global models is becoming well established. In this talk we will reexamine the Alpert et al. (1998) hypothesis using the most recent version of the GEOS-5 Data Assimilation System with assimilation of aerosols. We will explicitly calculate the impact of aerosols on the temperature analysis increments in the tropical Atlantic and assess the extent to which the inclusion of atmospheric aerosols has reduced these increments.

  10. A new approach for annotation of transposable elements using small RNA mapping

    PubMed Central

    El Baidouri, Moaine; Kim, Kyung Do; Abernathy, Brian; Arikit, Siwaret; Maumus, Florian; Panaud, Olivier; Meyers, Blake C.; Jackson, Scott A.

    2015-01-01

    Transposable elements (TEs) are mobile genomic DNA sequences found in most organisms. They so densely populate the genomes of many eukaryotic species that they are often the major constituents. With the rapid generation of many plant genome sequencing projects over the past few decades, there is an urgent need for improved TE annotation as a prerequisite for genome-wide studies. Analogous to the use of RNA-seq for gene annotation, we propose a new method for de novo TE annotation that uses as a guide 24 nt-siRNAs that are a part of TE silencing pathways. We use this new approach, called TASR (for Transposon Annotation using Small RNAs), for de novo annotation of TEs in Arabidopsis, rice and soybean and demonstrate that this strategy can be successfully applied for de novo TE annotation in plants. Executable PERL is available for download from: http://tasr-pipeline.sourceforge.net/ PMID:25813049
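
    The guiding signal is easy to picture: windows of the genome dense in aligned 24-nt reads are TE candidates. A toy Perl sketch of that counting step follows; the input format (chromosome, position, read length), window size and threshold are assumptions, and the actual TASR pipeline is considerably more involved.

        #!/usr/bin/perl
        # Conceptual sketch only: flag genomic windows dense in aligned 24-nt
        # reads as TE candidates. Input columns (chrom, position, length),
        # window size and threshold are assumptions, not the TASR pipeline.
        use strict;
        use warnings;

        my $win = 500;                      # window size in bp (assumed)
        my %hits;                           # "chrom:index" => 24-nt read count
        while (<>) {
            my ($chrom, $pos, $len) = split;
            next unless defined $len && $len == 24;   # keep the 24-nt class
            $hits{ $chrom . ':' . int($pos / $win) }++;
        }
        for my $key (sort keys %hits) {
            next unless $hits{$key} >= 10;  # report dense windows only
            my ($chrom, $idx) = split /:/, $key;
            printf "%s\t%d\t%d\t%d reads\n",
                   $chrom, $idx * $win, ($idx + 1) * $win, $hits{$key};
        }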

  11. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments.

    PubMed

    Thomas, Brandon R; Chylek, Lily A; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H A; Hlavacek, William S; Posner, Richard G

    2016-03-01

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. ABACAS: algorithm-based automatic contiguation of assembled sequences

    PubMed Central

    Assefa, Samuel; Keane, Thomas M.; Otto, Thomas D.; Newbold, Chris; Berriman, Matthew

    2009-01-01

    Summary: Due to the availability of new sequencing technologies, we are now increasingly interested in sequencing closely related strains of existing finished genomes. Recently a number of de novo and mapping-based assemblers have been developed to produce high quality draft genomes from new sequencing technology reads. New tools are necessary to take contigs from a draft assembly through to a fully contiguated genome sequence. ABACAS is intended as a tool to rapidly contiguate (align, order, orientate), visualize and design primers to close gaps on shotgun assembled contigs based on a reference sequence. The input to ABACAS is a set of contigs which will be aligned to the reference genome, ordered and orientated, visualized in the ACT comparative browser, and optimal primer sequences are automatically generated. Availability and Implementation: ABACAS is implemented in Perl and is freely available for download from http://abacas.sourceforge.net Contact: sa4@sanger.ac.uk PMID:19497936

  13. Webmail: an Automated Web Publishing System

    NASA Astrophysics Data System (ADS)

    Bell, David

    A system for publishing frequently updated information to the World Wide Web will be described. Many documents now hosted by the NOAO Web server require timely posting and frequent updates, but need only minor changes in markup or are in a standard format requiring only conversion to HTML. These include information from outside the organization, such as electronic bulletins, and a number of internal reports, both human and machine generated. Webmail uses procmail and Perl scripts to process incoming email messages in a variety of ways. This processing may include wrapping or conversion to HTML, posting to the Web or internal newsgroups, updating search indices or links on related pages, and sending email notification of the new pages to interested parties. The Webmail system has been in use at NOAO since early 1997 and has steadily grown to include fourteen recipes that together handle about fifty messages per week.
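
    One webmail "recipe" might look like the following sketch: procmail pipes a message to a Perl script that extracts the subject, escapes the body and writes an HTML page. The output path is a placeholder, and the real system's fourteen recipes also handle newsgroup posting, index updates and email notification.

        #!/usr/bin/perl
        # Sketch of one "recipe": procmail pipes a message to this script,
        # which wraps the body in HTML. The output path is a placeholder.
        use strict;
        use warnings;

        my ($subject, $in_body, @body) = ('(no subject)', 0);
        while (<STDIN>) {
            if (!$in_body) {
                $subject = $1 if /^Subject:\s*(.+)/;
                $in_body = 1  if /^\s*$/;   # blank line ends the headers
            } else {
                push @body, $_;
            }
        }

        my %esc = ('&' => '&amp;', '<' => '&lt;', '>' => '&gt;');
        s/([&<>])/$esc{$1}/g for $subject, @body;   # escape HTML metacharacters

        open my $out, '>', '/www/bulletins/latest.html' or die $!;
        print $out "<html><head><title>$subject</title></head><body>\n",
                   "<h1>$subject</h1>\n<pre>\n", @body, "</pre></body></html>\n";
        close $out;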

  14. Geospatial Authentication

    NASA Technical Reports Server (NTRS)

    Lyle, Stacey D.

    2009-01-01

    A software package has been developed that uses GPS signal structures to authenticate mobile devices into a network wirelessly and in real time, determining whether a rover is within a set of designated boundaries or a specific area before it may access critical geospatial information. The advantage lies in that the system only admits devices within the designated geospatial boundaries or areas into the server. The Geospatial Authentication software has two parts: server and client. The server software is a virtual private network (VPN) developed on the Linux operating system using the Perl programming language. The server can be a stand-alone VPN server or can be combined with other applications and services. The client software is a GUI Windows CE application, or Mobile Graphical Software, that allows users to authenticate into a network. The purpose of the client software is to pass the needed satellite information to the server for authentication.
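
    The server-side boundary test implied above reduces, in its simplest form, to a point-in-rectangle check on the reported GPS fix, as in this sketch. The coordinates are invented; the actual package also validates the GPS signal structure itself rather than trusting a bare position.

        #!/usr/bin/perl
        # Sketch of the simplest form of the boundary test: admit a client
        # only if its reported GPS fix falls inside a designated rectangle.
        # Coordinates are invented; the real package also validates the GPS
        # signal structure itself rather than trusting a bare position.
        use strict;
        use warnings;

        my %zone = ( lat_min => 27.60,  lat_max => 27.80,
                     lon_min => -97.45, lon_max => -97.25 );

        sub authorized {
            my ($lat, $lon) = @_;
            return $lat >= $zone{lat_min} && $lat <= $zone{lat_max}
                && $lon >= $zone{lon_min} && $lon <= $zone{lon_max};
        }

        my ($lat, $lon) = (27.7125, -97.3243);   # rover's reported position
        print authorized($lat, $lon) ? "access granted\n" : "access denied\n";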

  15. Cell illustrator 4.0: a computational platform for systems biology.

    PubMed

    Nagasaki, Masao; Saito, Ayumu; Jeong, Euna; Li, Chen; Kojima, Kaname; Ikeda, Emi; Miyano, Satoru

    2011-01-01

    Cell Illustrator is a software platform for Systems Biology that uses the concept of Petri net for modeling and simulating biopathways. It is intended for biological scientists working at bench. The latest version of Cell Illustrator 4.0 uses Java Web Start technology and is enhanced with new capabilities, including: automatic graph grid layout algorithms using ontology information; tools using Cell System Markup Language (CSML) 3.0 and Cell System Ontology 3.0; parameter search module; high-performance simulation module; CSML database management system; conversion from CSML model to programming languages (FORTRAN, C, C++, Java, Python and Perl); import from SBML, CellML, and BioPAX; and, export to SVG and HTML. Cell Illustrator employs an extension of hybrid Petri net in an object-oriented style so that biopathway models can include objects such as DNA sequence, molecular density, 3D localization information, transcription with frame-shift, translation with codon table, as well as biochemical reactions.

  16. The Biological Reference Repository (BioR): a rapid and flexible system for genomics annotation.

    PubMed

    Kocher, Jean-Pierre A; Quest, Daniel J; Duffy, Patrick; Meiners, Michael A; Moore, Raymond M; Rider, David; Hossain, Asif; Hart, Steven N; Dinu, Valentin

    2014-07-01

    The Biological Reference Repository (BioR) is a toolkit for annotating variants. BioR stores public and user-specific annotation sources in indexed JSON-encoded flat files (catalogs). The BioR toolkit provides the functionality to combine and retrieve annotation from these catalogs via the command-line interface. Several catalogs from commonly used annotation sources and instructions for creating user-specific catalogs are provided. Commands from the toolkit can be combined with other UNIX commands for advanced annotation processing. We also provide instructions for the development of custom annotation pipelines. The package is implemented in Java and makes use of external tools written in Java and Perl. The toolkit can be executed on Mac OS X 10.5 and above or any Linux distribution. The BioR application, quickstart, and user guide documents and many biological examples are available at http://bioinformaticstools.mayo.edu. © The Author 2014. Published by Oxford University Press.

  17. Installing the Unix Starlink Software

    NASA Astrophysics Data System (ADS)

    Bly, M. J.

    This note is the release note and installation instructions for the DEC Alpha AXP / Digital UNIX, Sun Sparc / Solaris v2.x, and Sun Sparc / SunOS 4.1.x versions of the Starlink Software Collection (USSC). You will be supplied with pre-built (and installed) versions on tape and will just need to copy the tape to disk to have a working version. The tapes (where appropriate) will, in addition, contain copies of the NAG and MEMSYS libraries, and Tcl, Tk, Expect, Mosaic, TeX, Pine, Perl, Jed, Ispell, Ghostscript, LaTeX2html and Ftnchek for the relevant system. The Sun Sparc SunOS 4.1.x version of the USSC was frozen at USSC111 and no further updates are available. The instructions for installing the main section of the USSC may continue to be used for installing the Sun Sparc SunOS 4.1.x version.

  18. Production Experiences with the Cray-Enabled TORQUE Resource Manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, Matthew A; Maxwell, Don E; Beer, David

    High performance computing resources utilize batch systems to manage the user workload. Cray systems are uniquely different from typical clusters due to Cray's Application Level Placement Scheduler (ALPS). ALPS manages binary transfer, job launch and monitoring, and error handling. Batch systems require special support to integrate with ALPS using an XML protocol called BASIL. Previous versions of Adaptive Computing's TORQUE and Moab batch suite integrated with ALPS from within Moab, using PERL scripts to interface with BASIL. This would occasionally lead to problems when the components became unsynchronized. Version 4.1 of the TORQUE Resource Manager introduced new features that allow it to directly integrate with ALPS using BASIL. This paper describes production experiences at Oak Ridge National Laboratory using the new TORQUE software versions, as well as ongoing and future work to improve TORQUE.

  19. nuMap: A Web Platform for Accurate Prediction of Nucleosome Positioning

    PubMed Central

    Alharbi, Bader A.; Alshammari, Thamir H.; Felton, Nathan L.; Zhurkin, Victor B.; Cui, Feng

    2014-01-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. The nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site. PMID:25220945

  20. ISRNA: an integrative online toolkit for short reads from high-throughput sequencing data.

    PubMed

    Luo, Guan-Zheng; Yang, Wei; Ma, Ying-Ke; Wang, Xiu-Jie

    2014-02-01

    Integrative Short Reads NAvigator (ISRNA) is an online toolkit for analyzing high-throughput small RNA sequencing data. Besides the high-speed genome mapping function, ISRNA provides statistics for genomic location, length distribution and nucleotide composition bias analysis of sequence reads. Number of reads mapped to known microRNAs and other classes of short non-coding RNAs, coverage of short reads on genes, expression abundance of sequence reads as well as some other analysis functions are also supported. The versatile search functions enable users to select sequence reads according to their sub-sequences, expression abundance, genomic location, relationship to genes, etc. A specialized genome browser is integrated to visualize the genomic distribution of short reads. ISRNA also supports management and comparison among multiple datasets. ISRNA is implemented in Java/C++/Perl/MySQL and can be freely accessed at http://omicslab.genetics.ac.cn/ISRNA/.

  1. ADASS Web Database XML Project

    NASA Astrophysics Data System (ADS)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
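
    A minimal sketch of the relational-to-XML mapping explored in the paper follows, using the CPAN DBI module with a MySQL driver; the table and column names are invented for the example.

        #!/usr/bin/perl
        # Sketch of the relational-to-XML mapping, using the CPAN DBI module
        # with the MySQL driver. Table and column names are invented.
        use strict;
        use warnings;
        use DBI;

        my $dbh  = DBI->connect('dbi:mysql:database=adass', 'user', 'pass',
                                { RaiseError => 1 });
        my $rows = $dbh->selectall_arrayref(
            'SELECT year, title, author FROM papers ORDER BY year');

        print qq{<?xml version="1.0"?>\n<papers>\n};
        for my $r (@$rows) {
            # escape XML metacharacters before embedding values in markup
            my ($year, $title, $author) =
                map { my $s = $_; $s =~ s/([&<>])/'&#'.ord($1).';'/ge; $s } @$r;
            print qq{  <paper year="$year">\n},
                  qq{    <title>$title</title>\n},
                  qq{    <author>$author</author>\n  </paper>\n};
        }
        print "</papers>\n";
        $dbh->disconnect;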

  2. nuMap: a web platform for accurate prediction of nucleosome positioning.

    PubMed

    Alharbi, Bader A; Alshammari, Thamir H; Felton, Nathan L; Zhurkin, Victor B; Cui, Feng

    2014-10-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. The nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site. Copyright © 2014 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  3. EasyModeller: A graphical interface to MODELLER

    PubMed Central

    2010-01-01

    Background MODELLER is a program for automated protein homology modeling. It is one of the most widely used tools for homology or comparative modeling of protein three-dimensional structures, but most users find it difficult to start with MODELLER as it is command-line based and requires knowledge of basic Python scripting to use efficiently. Findings The study was designed with the aim of developing the "EasyModeller" tool as a frontend graphical interface to MODELLER using Perl/Tk, which can be used as a standalone tool on the Windows platform with MODELLER and Python preinstalled. It helps inexperienced users to perform modeling, assessment, visualization, and optimization of protein models in a simple and straightforward way. Conclusion EasyModeller provides a straightforward graphical interface and functions as a stand-alone tool which can be used on a standard personal computer with Microsoft Windows as the operating system. PMID:20712861

  4. Suggestions for Improvement of User Access to GOCE L2 Data

    NASA Astrophysics Data System (ADS)

    Tscherning, C. C.

    2011-07-01

    ESA has required that most GOCE L2 products be delivered in XML format. This creates difficulties for users because a parser written in Perl is needed to convert the files to files without XML tags. However, several products, such as the spherical harmonic coefficients, are made available in standard form through the International Center for Global Gravity Field Models. The variance-covariance information for the gravity field models is only available without XML tags. It is suggested that all XML products be made available in the Virtual Data Archive as files without tags. Besides making the data directly usable by a FORTRAN program, this would also reduce the size (storage requirements) of the products to about 30%. A further reduction in storage could be achieved by tuning the number of digits for the individual quantities in the products so that it corresponds to the actual number of significant digits.
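
    The kind of tag-stripping Perl parser mentioned above can be approximated in a few lines. This toy version drops every tag on a line with a regex; a production parser should use a real XML module (e.g. XML::LibXML) and, unlike this sketch, handle tags that span lines.

        #!/usr/bin/perl
        # Toy tag stripper in the spirit of the Perl parser mentioned above.
        # A production parser should use a real XML module (e.g. XML::LibXML)
        # and, unlike this regex sketch, handle tags that span lines.
        use strict;
        use warnings;

        while (my $line = <>) {
            $line =~ s/<[^>]*>//g;             # drop every tag on the line
            $line =~ s/^\s+|\s+$//g;           # trim leftover indentation
            print "$line\n" if length $line;   # keep lines with actual values
        }
        # usage: perl strip_tags.pl product.xml > product.dat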

  5. Documenting AUTOGEN and APGEN Model Files

    NASA Technical Reports Server (NTRS)

    Gladden, Roy E.; Khanampornpan, Teerapat; Fisher, Forrest W.; DelGuericio, Chris C.

    2008-01-01

    A computer program called "autogen hypertext map generator" satisfies a need for documenting and assisting in visualization of, and navigation through, model files used in the AUTOGEN and APGEN software mentioned in the two immediately preceding articles. This program parses autogen script files, autogen model files, PERL scripts, and apgen activity-definition files and produces a hypertext map of the files to aid in the navigation of the model. This program also provides a facility for adding notes and descriptions beyond what is in the source model represented by the hypertext map. Further, this program provides access to a summary of the model through variable, function, subroutine, activity and resource declarations, as well as providing full access to the source model and source code. The use of the tool enables easy access to the declarations and the ability to traverse routines and calls while analyzing the model.
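
    As a flavor of the map generation, the sketch below scans Perl sources for subroutine declarations and emits a hypertext index. It is an illustration only; the real tool also parses autogen and APGEN model files and supports notes, variables, activities and resources.

        #!/usr/bin/perl
        # Illustration only: index subroutine declarations in Perl sources as
        # a hypertext map. The real tool also parses autogen/APGEN model files.
        use strict;
        use warnings;

        print "<html><body><h1>Model map</h1>\n<ul>\n";
        for my $file (@ARGV) {
            open my $fh, '<', $file or die "$file: $!";
            while (<$fh>) {
                next unless /^\s*sub\s+(\w+)/;     # a subroutine declaration
                print qq{<li><a href="$file#L$.">$1</a> ($file, line $.)</li>\n};
            }
            close $fh;
        }
        print "</ul></body></html>\n";
        # usage: perl modelmap.pl *.pl > map.html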

  6. Replacement Sequence of Events Generator

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Gladden, Roy; Wenkert, Daniel; Khanampompan, Teerapat

    2008-01-01

    The soeWINDOW program automates the generation of an ITAR (International Traffic in Arms Regulations)-compliant sub-RSOE (Replacement Sequence of Events) by extracting a specified temporal window from an RSOE while maintaining page header information. RSOEs contain a significant amount of information that is not ITAR-compliant, yet foreign partners need to see the command details for their instrument, as well as the surrounding commands that provide context for validation. soeWINDOW can serve as an example of how command support products can be made ITAR-compliant for future missions. This software is a Perl script intended for use in the mission operations UNIX environment. It is designed to support the MRO (Mars Reconnaissance Orbiter) instrument team. The tool also provides automated DOM (Distributed Object Manager) storage into the special ITAR-okay DOM collection, and can be used for creating focused RSOEs for product review by any of the MRO teams.

  7. Ferrenberg Swendsen Analysis of LLNL and NYBlue BG/L p4rhms Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soltz, R

    2007-12-05

    These results are from the continuing lattice quantum chromodynamics runs on BG/L, specifically from the Ferrenberg-Swendsen analysis [?] of the combined data from LLNL and NYBlue BG/L runs for 32³ × 8 lattices with p4rhmc v2.0 QMP-MPI.X (semi-optimized p4 code using QMP over MPI). The jobs include beta values ranging from 3.525 to 3.535, with an alternate analysis extending to 3.540. The NYBlue data sets are from 9k trajectories from Oct 2007, and the LLNL data are from two independent streams of ~5k trajectories each, taken from the July 2007 runs. The following outputs are produced by the fs-2+1-chiub.c program. All outputs have had checksums produced by addCks.pl and checked by the checkCks.pl Perl script after scanning.

  8. GenePRIMP: A Gene Prediction Improvement Pipeline For Prokaryotic Genomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyrpides, Nikos C.; Ivanova, Natalia N.; Pati, Amrita

    2010-07-08

    GenePRIMP (Gene Prediction Improvement Pipeline, http://geneprimp.jgi-psf.org) is a computational process that performs evidence-based evaluation of gene models in prokaryotic genomes and reports anomalies including inconsistent start sites, missing genes, and split genes. We show that manual curation of gene models using the anomaly reports generated by GenePRIMP improves their quality, and demonstrate the applicability of GenePRIMP in improving finishing quality and comparing different genome sequencing and annotation technologies. Keywords in context: gene model, quality control, translation start sites, automatic correction. Hardware requirements: PC, Mac; Operating system: UNIX/Linux; Compiler/version: Perl 5.8.5 or higher; Special requirements: NCBI BLAST and nr installation; File types: source code, executable module(s), sample problem input data, installation instructions, other, programmer documentation. Location/transmission: http://geneprimp.jgi-psf.org/gp.tar.gz

  9. Prediction of novel pre-microRNAs with high accuracy through boosting and SVM.

    PubMed

    Zhang, Yuanwei; Yang, Yifan; Zhang, Huan; Jiang, Xiaohua; Xu, Bo; Xue, Yu; Cao, Yunxia; Zhai, Qian; Zhai, Yong; Xu, Mingqing; Cooke, Howard J; Shi, Qinghua

    2011-05-15

    High-throughput deep-sequencing technology has generated an unprecedented number of expressed short sequence reads, presenting not only an opportunity but also a challenge for prediction of novel microRNAs. To verify the existence of candidate microRNAs, we have to show that these short sequences can be processed from candidate pre-microRNAs. However, it is laborious and time consuming to verify these using existing experimental techniques. Therefore, here, we describe a new method, miRD, which is constructed using two feature selection strategies based on support vector machines (SVMs) and boosting method. It is a high-efficiency tool for novel pre-microRNA prediction with accuracy up to 94.0% among different species. miRD is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/rpg/mird/mird.php.

  10. Does Altered Uric Acid Metabolism Contribute to Diabetic Kidney Disease Pathophysiology?

    PubMed

    Gul, Ambreen; Zager, Philip

    2018-03-01

    Multiple experimental and clinical studies have identified pathways by which uric acid may facilitate the development and progression of chronic kidney disease (CKD) in people with diabetes. However, it remains uncertain whether the association of uric acid with CKD represents a pathogenic effect or merely reflects renal impairment. In contrast to many published reports, a recent Mendelian randomization study did not identify a causal link between uric acid and CKD in people with type 1 diabetes. Two multicenter randomized controlled trials, Preventing Early Renal Function Loss in Diabetes (PERL) and FEbuxostat versus placebo rAndomized controlled Trial regarding reduced renal function in patients with Hyperuricemia complicated by chRonic kidney disease stage 3 (FEATHER), were recently designed to assess whether uric acid lowering slows progression of CKD. We review the evidence supporting a role for uric acid in the pathogenesis of CKD in people with diabetes and the putative benefits of uric acid lowering.

  11. A TALE-inspired computational screen for proteins that contain approximate tandem repeats.

    PubMed

    Perycz, Malgorzata; Krwawicz, Joanna; Bochtler, Matthias

    2017-01-01

    TAL (transcription activator-like) effectors (TALEs) are bacterial proteins that are secreted from bacteria to plant cells to act as transcriptional activators. TALEs and related proteins (RipTALs, BurrH, MOrTL1 and MOrTL2) contain approximate tandem repeats that differ in conserved positions that define specificity. Using PERL, we screened ~47 million protein sequences for TALE-like architecture characterized by approximate tandem repeats (between 30 and 43 amino acids in length) and sequence variability in conserved positions, without requiring sequence similarity to TALEs. Candidate proteins were scored according to their propensity for nuclear localization, secondary structure, repeat sequence complexity, as well as covariation and predicted structural proximity of variable residues. Biological context was tentatively inferred from co-occurrence of other domains and interactome predictions. Approximate repeats with TALE-like features that merit experimental characterization were found in a protein of chestnut blight fungus, a eukaryotic plant pathogen.
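
    A heavily simplified version of the screening idea fits in one regular expression: a backreference finds exact tandem repeats of 30-43 residues. The published screen scores approximate repeats and variable positions, which a plain regex cannot do; the sequence below is synthetic.

        #!/usr/bin/perl
        # Heavily simplified: a backreference regex finds *exact* tandem
        # repeats of 30-43 residues. The published screen scores approximate
        # repeats and variable positions, which a plain regex cannot do;
        # the sequence below is synthetic.
        use strict;
        use warnings;

        my $unit = 'LTPEQVVAIASNGGGKQALETVQRLLPVLCQAHG';  # 34-aa TALE-like unit
        my $seq  = 'MKR' . ($unit x 4) . 'STOPREGION';

        while ($seq =~ /(\w{30,43})(?=\1)/g) {
            printf "repeat unit of %d aa at offset %d: %s\n",
                   length($1), $-[0], $1;
        }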

  12. A TALE-inspired computational screen for proteins that contain approximate tandem repeats

    PubMed Central

    Krwawicz, Joanna

    2017-01-01

    TAL (transcription activator-like) effectors (TALEs) are bacterial proteins that are secreted from bacteria to plant cells to act as transcriptional activators. TALEs and related proteins (RipTALs, BurrH, MOrTL1 and MOrTL2) contain approximate tandem repeats that differ in conserved positions that define specificity. Using PERL, we screened ~47 million protein sequences for TALE-like architecture characterized by approximate tandem repeats (between 30 and 43 amino acids in length) and sequence variability in conserved positions, without requiring sequence similarity to TALEs. Candidate proteins were scored according to their propensity for nuclear localization, secondary structure, repeat sequence complexity, as well as covariation and predicted structural proximity of variable residues. Biological context was tentatively inferred from co-occurrence of other domains and interactome predictions. Approximate repeats with TALE-like features that merit experimental characterization were found in a protein of chestnut blight fungus, a eukaryotic plant pathogen. PMID:28617832

  13. RNA-Seq-Based Transcript Structure Analysis with TrBorderExt.

    PubMed

    Wang, Yejun; Sun, Ming-An; White, Aaron P

    2018-01-01

    RNA-Seq has become a routine strategy for genome-wide gene expression comparisons in bacteria. Despite lower resolution in transcript border parsing compared with dRNA-Seq, TSS-EMOTE, Cappable-seq, Term-seq, and others, directional RNA-Seq still illustrates its advantages: low cost, quantification and transcript border analysis with a medium resolution (±10-20 nt). To facilitate mining of directional RNA-Seq datasets especially with respect to transcript structure analysis, we developed a tool, TrBorderExt, which can parse transcript start sites and termination sites accurately in bacteria. A detailed protocol is described in this chapter for how to use the software package step by step to identify bacterial transcript borders from raw RNA-Seq data. The package was developed with Perl and R programming languages, and is accessible freely through the website: http://www.szu-bioinf.org/TrBorderExt .

  14. DIEGO: detection of differential alternative splicing using Aitchison's geometry.

    PubMed

    Doose, Gero; Bernhart, Stephan H; Wagener, Rabea; Hoffmann, Steve

    2018-03-15

    Alternative splicing is a biological process of fundamental importance in most eukaryotes. It plays a pivotal role in cell differentiation and gene regulation and has been associated with a number of different diseases. The widespread availability of RNA-Sequencing capacities allows an ever closer investigation of differentially expressed isoforms. However, most tools for differential alternative splicing (DAS) analysis do not take split reads, i.e. the most direct evidence for a splice event, into account. Here, we present DIEGO, a compositional data analysis method able to detect DAS between two sets of RNA-Seq samples based on split reads. The python tool DIEGO works without isoform annotations and is fast enough to analyze large experiments while being robust and accurate. We provide python and perl parsers for common formats. The software is available at: www.bioinf.uni-leipzig.de/Software/DIEGO. steve@bioinf.uni-leipzig.de. Supplementary data are available at Bioinformatics online.

  15. Cell Illustrator 4.0: a computational platform for systems biology.

    PubMed

    Nagasaki, Masao; Saito, Ayumu; Jeong, Euna; Li, Chen; Kojima, Kaname; Ikeda, Emi; Miyano, Satoru

    2010-01-01

    Cell Illustrator is a software platform for Systems Biology that uses the concept of Petri net for modeling and simulating biopathways. It is intended for biological scientists working at bench. The latest version of Cell Illustrator 4.0 uses Java Web Start technology and is enhanced with new capabilities, including: automatic graph grid layout algorithms using ontology information; tools using Cell System Markup Language (CSML) 3.0 and Cell System Ontology 3.0; parameter search module; high-performance simulation module; CSML database management system; conversion from CSML model to programming languages (FORTRAN, C, C++, Java, Python and Perl); import from SBML, CellML, and BioPAX; and, export to SVG and HTML. Cell Illustrator employs an extension of hybrid Petri net in an object-oriented style so that biopathway models can include objects such as DNA sequence, molecular density, 3D localization information, transcription with frame-shift, translation with codon table, as well as biochemical reactions.

  16. Automatic Command Sequence Generation

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat

    2007-01-01

    Automatic Sequence Generator (Autogen) Version 3.0 software automatically generates command sequences for the Mars Reconnaissance Orbiter (MRO) and several other JPL spacecraft operated by the multi-mission support team. Autogen uses standard JPL sequencing tools like APGEN, ASP, SEQGEN, and the DOM database to automate the generation of uplink command products, Spacecraft Command Message Format (SCMF) files, and the corresponding ground command products, DSN Keywords Files (DKF). Autogen supports all the major mission phases, including the cruise, aerobraking, mapping/science, and relay mission phases. Autogen is a Perl script, which functions within the mission operations UNIX environment. It consists of two parts: a set of model files and the autogen Perl script. Autogen encodes the behaviors of the system into a model and encodes algorithms for context-sensitive customizations of the modeled behaviors. The model includes knowledge of different mission phases and how the resultant command products must differ for these phases. The executable software portion of Autogen automates the setup and use of APGEN for constructing a spacecraft activity sequence file (SASF). The setup includes file retrieval through the DOM (Distributed Object Manager), an object database used to store project files. This step retrieves all the needed input files for generating the command products. Depending on the mission phase, Autogen also uses the ASP (Automated Sequence Processor) and SEQGEN to generate the command product sent to the spacecraft. Autogen also provides the means for customizing sequences through the use of configuration files. By automating the majority of the sequence generation process, Autogen eliminates many sequence generation errors commonly introduced by manually constructing spacecraft command sequences. Through the layering of commands into the sequence by a series of scheduling algorithms, users are able to rapidly and reliably construct the desired uplink command products. With the aid of Autogen, sequences may be produced in a matter of hours instead of weeks, with a significant reduction in the number of people on the sequence team. As a result, the uplink product generation process is significantly streamlined and mission risk is significantly reduced. Autogen is used for operations of MRO, Mars Global Surveyor (MGS), Mars Exploration Rover (MER), and Mars Odyssey, and will be used for operations of Phoenix. Autogen Version 3.0 is the operational version of Autogen, including the MRO adaptation for the cruise mission phase; it was also used for development of the aerobraking and mapping mission phases for MRO.

  17. Whole-Body Diffusion-weighted MR Imaging of Iron Deposits in Hodgkin, Follicular, and Diffuse Large B-Cell Lymphoma.

    PubMed

    Cottereau, Anne-Ségolène; Mulé, Sébastien; Lin, Chieh; Belhadj, Karim; Vignaud, Alexandre; Copie-Bergman, Christiane; Boyez, Alice; Zerbib, Pierre; Tacher, Vania; Scherman, Elodie; Haioun, Corinne; Luciani, Alain; Itti, Emmanuel; Rahmouni, Alain

    2018-02-01

    Purpose To analyze the frequency and distribution of low-signal-intensity regions (LSIRs) in lymphoma lesions and to compare these to fluorodeoxyglucose (FDG) uptake and biologic markers of inflammation. Materials and Methods The authors analyzed 61 untreated patients with a bulky lymphoma (at least one tumor mass ≥7 cm in diameter). When a LSIR within tumor lesions was detected on diffusion-weighted images obtained with a b value of 50 sec/mm², a T2-weighted gradient-echo (GRE) sequence was performed and calcifications were searched for with computed tomography (CT). In two patients, Perls staining was performed on tissue samples from the LSIR. LSIRs were compared with biologic inflammatory parameters and baseline FDG positron emission tomography (PET)/CT parameters (maximum standardized uptake value [SUVmax], total metabolic tumor volume [TMTV]). Results LSIRs were detected in 22 patients and corresponded to signal void on GRE images; one LSIR was due to calcifications, and three LSIRs were due to a recent biopsy. In 18 patients, LSIRs appeared to be related to focal iron deposits; this was proven with Perls staining in two patients. The LSIRs presumed to be due to iron deposits were found mostly in patients with aggressive lymphoma (nine of 26 patients with Hodgkin lymphoma and eight of 20 patients with diffuse large B-cell lymphoma vs one of 15 patients with follicular lymphoma; P = .047) and with advanced-stage disease (15 of 18 patients). LSIRs were observed in spleen (n = 14), liver (n = 3), and nodal (n = 8) lesions and corresponded to foci of FDG uptake, with mean SUVmax of 9.8, 6.7, and 16.2, respectively. These patients had significantly higher serum levels of C-reactive protein, α1-globulin, and α2-globulin and more frequently had microcytic anemia than those without such deposits (P = .0072, P = .003, P = .0068, and P < .0001, respectively). They also had a significantly higher TMTV (P = .0055) and higher levels of spleen involvement (P < .0001). Conclusion LSIRs due to focal iron deposits are detected in lymphoma lesions and are associated with a more pronounced biologic inflammatory syndrome. © RSNA, 2017 Online supplemental material is available for this article.

  18. Morpho-histology of head kidney of female catfish Heteropneustes fossilis: seasonal variations in melano-macrophage centers, melanin contents and effects of lipopolysaccharide and dexamethasone on melanins.

    PubMed

    Kumar, Ravi; Joy, K P; Singh, S M

    2016-10-01

    In the catfish Heteropneustes fossilis, the anterior kidney is a hemopoietic tissue which surrounds the adrenal homologues, the interrenal (IR) and chromaffin tissues, corresponding to the adrenal cortex and adrenal medulla of higher mammals. The IR tissue is arranged in cell cords around the posterior cardinal vein (PCV) and its tributaries and secretes corticosteroids. The chromaffin tissue is scattered singly or in nests of one or more cells around the epithelial lining of the PCV or blood capillaries within the IR tissue. They are ferric ferricyanide-positive. Leukemia-inhibitory factor (LIF)-like reactivity was noticed in the lining of the epithelium of the IR cell cords and around the wall of the PCV and blood capillaries. No staining was observed in the hemopoietic cells. IL-1β- and TNF-α-like immunoreactivity was seen in certain cells in the hemopoietic tissue but not in the IR region. Macrophages were identified with mammalian macrophage-specific MAC387 antibodies and are present in the hemopoietic mass but not in the IR tissue. Pigments accumulate in the hemopoietic mass as melano-macrophage centers (MMCs) and are PAS-, Schmorl's- and Perls'-positive. The pigments contain melanin (black), hemosiderin (blue) and lipofuscin/ceroid (oxidized lipid, yellowish tan), as evident from the Perls' reaction. The MMCs were TUNEL-positive, as evident from FITC fluorescence, indicating their apoptotic nature. The MMCs showed significant seasonal variation, with their density increasing to a peak in the postspawning phase. Melanins were characterized spectrophotometrically for the first time in fish anterior kidney. The predominant form is pheomelanin (PM), followed by eumelanin (EM) and alkali-soluble melanin (ASM). Melanins showed significant seasonal variations, with levels low in the resting phase and increasing to a peak in the postspawning phase. Under in vitro conditions, lipopolysaccharide (10 µg/mL) treatment significantly increased the levels of PM and EM both at 16 and at 32 h, and the ASM level at 32 h. On the other hand, the synthetic glucocorticoid dexamethasone (100 nM) significantly decreased the levels of EM, PM and ASM time-dependently. The results indicate that the anterior kidney is an important site of immune-endocrine interaction.

  19. LCG MCDB—a knowledgebase of Monte-Carlo simulated events

    NASA Astrophysics Data System (ADS)

    Belov, S.; Dudko, L.; Galkin, E.; Gusev, A.; Pokorski, W.; Sherstnev, A.

    2008-02-01

    In this paper we report on LCG Monte-Carlo Data Base (MCDB) and software which has been developed to operate MCDB. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC Collaborations by experts. In many cases, the modern Monte-Carlo simulation of physical processes requires expert knowledge in Monte-Carlo generators or significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulate simulated events of this type. The main motivation behind LCG MCDB is to make the sophisticated MC event samples available for various physical groups. All the data from MCDB is accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project. Program summaryProgram title: LCG Monte-Carlo Data Base Catalogue identifier: ADZX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence No. of lines in distributed program, including test data, etc.: 30 129 No. of bytes in distributed program, including test data, etc.: 216 943 Distribution format: tar.gz Programming language: Perl Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb Operating system: Scientific Linux CERN 3/4 RAM: 1 073 741 824 bytes (1 Gb) Classification: 9 External routines:perl >= 5.8.5; Perl modules DBD-mysql >= 2.9004, File::Basename, GD::SecurityImage, GD::SecurityImage::AC, Linux::Statistics, XML::LibXML > 1.6, XML::SAX, XML::NamespaceSupport; Apache HTTP Server >= 2.0.59; mod auth external >= 2.2.9; edg-utils-system RPM package; gd >= 2.0.28; rpm package CASTOR-client >= 2.1.2-4; arc-server (optional) Nature of problem: Often, different groups of experimentalists prepare similar samples of particle collision events or turn to the same group of authors of Monte-Carlo (MC) generators to prepare the events. For example, the same MC samples of Standard Model (SM) processes can be employed for the investigations either in the SM analyses (as a signal) or in searches for new phenomena in Beyond Standard Model analyses (as a background). If the samples are made available publicly and equipped with corresponding and comprehensive documentation, it can speed up cross checks of the samples themselves and physical models applied. Some event samples require a lot of computing resources for preparation. So, a central storage of the samples prevents possible waste of researcher time and computing resources, which can be used to prepare the same events many times. Solution method: Creation of a special knowledgebase (MCDB) designed to keep event samples for the LHC experimental and phenomenological community. The knowledgebase is realized as a separate web-server ( http://mcdb.cern.ch). All event samples are kept on types at CERN. Documentation describing the events is the main contents of MCDB. Users can browse the knowledgebase, read and comment articles (documentation), and download event samples. Authors can upload new event samples, create new articles, and edit own articles. Restrictions: The software is adopted to solve the problems, described in the article and there are no any additional restrictions. Unusual features: The software provides a framework to store and document large files with flexible authentication and authorization system. 
Different external storage systems with large capacity can be used to keep the files. The Web Content Management System provides all of the necessary interfaces for the authors of the files, end users, and administrators. Running time: Real-time operations. References: [1] The main LCG MCDB server, http://mcdb.cern.ch/. [2] P. Bartalini, L. Dudko, A. Kryukov, I.V. Selyuzhenkov, A. Sherstnev, A. Vologdin, LCG Monte-Carlo data base, hep-ph/0404241. [3] J.P. Baud, B. Couturier, C. Curran, J.D. Durand, E. Knezo, S. Occhetti, O. Barring, CASTOR: status and evolution, cs.OH/0305047.

  20. Verdant: automated annotation, alignment and phylogenetic analysis of whole chloroplast genomes.

    PubMed

    McKain, Michael R; Hartsock, Ryan H; Wohl, Molly M; Kellogg, Elizabeth A

    2017-01-01

    Chloroplast genomes are now produced in the hundreds for angiosperm phylogenetics projects, but current methods for annotation, alignment and tree estimation still require some manual intervention, reducing throughput and increasing analysis time for large chloroplast systematics projects. Verdant is a web-based software suite and database built to take advantage of a novel annotation program, annoBTD. Using annoBTD, Verdant provides accurate annotation of chloroplast genomes without manual intervention. Subsequent alignment and tree estimation can incorporate newly annotated and publicly available plastomes and can accommodate a large number of taxa. Verdant sharply reduces the time required for analysis of assembled chloroplast genomes and removes the need for pipelines and software on personal hardware. Verdant is available at http://verdant.iplantcollaborative.org/plastidDB/. It is implemented in PHP, Perl, MySQL, JavaScript, HTML and CSS, with all major browsers supported. Contact: mrmckain@gmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  1. A Web Terminology Server Using UMLS for the Description of Medical Procedures

    PubMed Central

    Burgun, Anita; Denier, Patrick; Bodenreider, Olivier; Botti, Geneviève; Delamarre, Denis; Pouliquen, Bruno; Oberlin, Philippe; Lévéque, Jean M.; Lukacs, Bertrand; Kohler, François; Fieschi, Marius; Le Beux, Pierre

    1997-01-01

    The Model for Assistance in the Orientation of a User within Coding Systems (MAOUSSC) project has been designed to provide a representation for medical and surgical procedures that allows several applications to be developed from several viewpoints. It is based on a conceptual model, a controlled set of terms, and Web server development. The design includes the UMLS knowledge sources associated with additional knowledge about medico-surgical procedures. The model was implemented using a relational database. The authors developed a complete interface for the Web presentation, with the intermediary layer being written in PERL. The server has been used for the representation of medico-surgical procedures that occur in the discharge summaries of the national survey of hospital activities that is performed by the French Health Statistics Agency in order to produce inpatient profiles. The authors describe the current status of the MAOUSSC server and discuss their interest in using such a server to assist in the coordination of terminology tasks and in the sharing of controlled terminologies. PMID:9292841

  2. Production Management System for AMS Computing Centres

    NASA Astrophysics Data System (ADS)

    Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.

    2017-10-01

    The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production) as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquiring, submitting, monitoring, transferring, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model, and is implemented in the scripting languages Python and Perl with the built-in SQLite3 database on Linux operating systems. Different batch management systems, file system storage options, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
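
    The Deterministic Finite Automaton model mentioned in the abstract is easy to picture with a short sketch. The following Perl fragment is a minimal illustration rather than the actual AMS code: each job carries a named state, and only whitelisted transitions are allowed. The state names and transition table are hypothetical.

        use strict;
        use warnings;

        # Hypothetical job lifecycle as a DFA: a job may only move along
        # the transitions listed for its current state.
        my %transitions = (
            acquired    => ['submitted'],
            submitted   => ['running', 'failed'],
            running     => ['transferred', 'failed'],
            transferred => ['accounted'],
            failed      => ['submitted'],    # resubmission path
        );

        sub advance {
            my ($job, $next) = @_;
            my $allowed = $transitions{ $job->{state} } || [];
            die "illegal transition $job->{state} -> $next\n"
                unless grep { $_ eq $next } @$allowed;
            $job->{state} = $next;
        }

        my $job = { id => 42, state => 'acquired' };
        advance($job, 'submitted');
        advance($job, 'running');
        print "job $job->{id} is now $job->{state}\n";

    Because every move is validated against the table, a misbehaving worker cannot, for example, mark an unsubmitted job as transferred, which is what makes the DFA model attractive for bookkeeping across 12 heterogeneous centers.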

  3. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    PubMed

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, lightweight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software which needs to parse large amounts of sequence data quickly and accurately. For end users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets, such as those commonly produced by massively parallel sequencing (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
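
    FastaValidator itself is a Java library, but the checks it performs are easy to sketch. The following Perl one-pass validator is a rough stand-in, not the library's API; it enforces the two core rules: records begin with a '>' header, and sequence lines contain only IUPAC nucleotide codes.

        use strict;
        use warnings;

        # Minimal FASTA well-formedness check: warns on each error and
        # exits non-zero if any were found.
        my $valid       = 1;
        my $seen_header = 0;
        while (my $line = <>) {
            chomp $line;
            next if $line =~ /^\s*$/;              # ignore blank lines
            if ($line =~ /^>/) {
                $seen_header = 1;                  # new record header
            } elsif (!$seen_header) {
                warn "sequence data before first header at line $.\n";
                $valid = 0;
            } elsif ($line !~ /^[ACGTURYSWKMBDHVN]+$/i) {
                warn "illegal character in sequence at line $.\n";
                $valid = 0;
            }
        }
        exit($valid ? 0 : 1);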

  4. Assessment of morphological and functional changes in organs of rats after intramuscular introduction of iron nanoparticles and their agglomerates.

    PubMed

    Sizova, Elena; Miroshnikov, Sergey; Yausheva, Elena; Polyakova, Valentina

    2015-01-01

    The research was performed on male Wistar rats, based on the assumption that new microelement preparations containing metal nanoparticles and their agglomerates have potential. Morphological and functional changes in tissues at the injection site and the dynamics of chemical element metabolism (25 indicators) in the body were assessed after repeated intramuscular injections (seven in total) of a preparation containing agglomerates of iron nanoparticles. As a result, an iron depot was formed in myosymplasts at the injection sites. The quantity of muscle fibers staining positive with Perls' stain increased with the number of injections. However, the concentration of most chemical elements, including iron, significantly decreased in the skeletal muscle system as a whole (injection sites not included), before returning to the control level after the sixth and seventh injections. Among the studied organs (liver, kidneys, and spleen), Caspase-3 expression was revealed only in the spleen, and it depended directly on the number of injections. The processes of iron elimination from the preparation containing nanoparticles and their agglomerates differed in intensity.

  5. Relapsed chronic lymphocytic leukemia retreated with rituximab: interim results of the PERLE study.

    PubMed

    Chaoui, Driss; Choquet, Sylvain; Sanhes, Laurence; Mahé, Béatrice; Hacini, Maya; Fitoussi, Olivier; Arkam, Yazid; Orfeuvre, Hubert; Dilhuydy, Marie-Sarah; Barry, Marly; Jourdan, Eric; Dreyfus, Brigitte; Tempescul, Adrian; Leprêtre, Stéphane; Bardet, Aurélie; Leconte, Pierre; Maynadié, Marc; Delmer, Alain

    2017-06-01

    This prospective non-interventional study assessed the management of relapsed/refractory CLL after one or two treatments with rituximab, and retreatment with a rituximab-based regimen. An interim analysis was performed at the end of the induction period in 192 evaluable patients. Median age was 72 years [range 35-89]; patients were in first (55%) or second (45%) relapse. Rituximab had been administered during the first (68%), second (92%), or both treatment lines (20%). R-bendamustine was administered in 56% of patients, R-purine analogs in 21%, and R-alkylating agents in 19%. The overall response rate (ORR) was 74.6%, highest with R-purine analogs (90%), followed by R-bendamustine (75%) and R-alkylating agents (69%). ORR was lower in del(17p) patients (43%) and with third-time rituximab (31%). The most frequent adverse events were hematological (23% of patients), including neutropenia (11%) and infections (12%); grade 3/4 AEs occurred in 23% of patients, mainly hematological (18%); 7% of patients died during induction treatment. This first large study focusing on relapsed/refractory CLL patients retreated with rituximab-based regimens is still ongoing.

  6. SeqDepot: streamlined database of biological sequences and precomputed features.

    PubMed

    Ulrich, Luke E; Zhulin, Igor B

    2014-01-15

    Assembling and/or producing integrated knowledge of sequence features continues to be an onerous and redundant task despite a large number of existing resources. We have developed SeqDepot, a novel database that focuses solely on two primary goals: (i) assimilating known primary sequences with predicted feature data and (ii) providing the most simple and straightforward means to procure and readily use this information. Access to >28.5 million sequences and 300 million features is provided through a well-documented and flexible RESTful interface that supports fetching specific data subsets, bulk queries, visualization and searching by MD5 digests or external database identifiers. We have also developed an HTML5/JavaScript web application exemplifying how to interact with SeqDepot and Perl/Python scripts for use with local processing pipelines. Freely available on the web at http://seqdepot.net/. REST access via http://seqdepot.net/api/v1. Database files and scripts may be downloaded from http://seqdepot.net/download.
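
    Since the abstract highlights REST lookups by MD5 digest, a client is only a few lines of Perl. The sketch below hashes a sequence and queries the documented base URL; the "/aseqs/<digest>" route and the hex form of the digest are assumptions for illustration, so consult the API documentation for the actual identifier scheme.

        use strict;
        use warnings;
        use Digest::MD5 qw(md5_hex);
        use LWP::UserAgent;

        # Look up precomputed features for a sequence by its MD5 digest.
        my $seq = 'MSKGEELFTG';            # dummy protein fragment
        $seq =~ s/\s+//g;
        my $digest = md5_hex(uc $seq);

        my $ua  = LWP::UserAgent->new(timeout => 30);
        my $res = $ua->get("http://seqdepot.net/api/v1/aseqs/$digest");
        if ($res->is_success) {
            print $res->decoded_content, "\n";   # feature record
        } else {
            warn 'lookup failed: ', $res->status_line, "\n";
        }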

  7. Improvement of web-based data acquisition and management system for GOSAT validation lidar data analysis

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Takubo, Shoichiro; Kawasaki, Takeru; Abdullah, Indra Nugraha; Uchino, Osamu; Morino, Isamu; Yokota, Tatsuya; Nagai, Tomohiro; Sakai, Tetsu; Maki, Takashi; Arai, Kohei

    2013-01-01

    A web-based data acquisition and management system for GOSAT (Greenhouse gases Observation SATellite) validation lidar data analysis has been developed. The system consists of a data acquisition sub-system (DAS) and a data management sub-system (DMS). DAS, written in Perl, acquires AMeDAS (Automated Meteorological Data Acquisition System) ground-level local meteorological data, GPS radiosonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data and GOSAT validation lidar data. DMS, written in PHP, displays satellite-pass dates and all acquired data. In this article, we briefly describe some improvements for higher performance and data usability: DAS now automatically calculates molecule number density profiles from the GPS radiosonde upper-air meteorological data and the U.S. standard atmosphere model, and predicted ozone density profile images above Saga city are calculated using the Meteorological Research Institute (MRI) chemistry-climate model version 2 for comparison with actual ozone DIAL data.

  8. IMCAT: Image and Catalogue Manipulation Software

    NASA Astrophysics Data System (ADS)

    Kaiser, Nick

    2011-08-01

    The IMCAT software was developed initially to do faint galaxy photometry for weak lensing studies, and provides a fairly complete set of tools for this kind of work. Unlike most packages for doing data analysis, the tools are standalone unix commands which you can invoke from the shell, via shell scripts or from perl scripts. The tools are arranged in a tree of directories. One main branch is the 'imtools'. These deal only with fits files. The most important imtool is the 'image calculator' 'ic', which allows one to do rather general operations on fits images. A second branch is the 'catools', which operate only on catalogues. The key cattool is 'lc'; this effectively defines the format of IMCAT catalogues, and allows one to do very general operations on and filtering of such catalogues. A third branch is the 'imcattools'. These tend to be much more specialised than the imtools and catools, and are focussed on faint galaxy photometry.

  9. Ultrafast laser direct hard-mask writing for high efficiency c-Si texture designs

    NASA Astrophysics Data System (ADS)

    Kumar, Kitty; Lee, Kenneth K. C.; Nogami, Jun; Herman, Peter R.; Kherani, Nazir P.

    2013-03-01

    This study reports a high-resolution hard-mask laser writing technique to facilitate the selective etching of crystalline silicon (c-Si) into an inverted-pyramidal texture with feature size and periodicity on the order of the wavelength, which thus provides both anti-reflection and effective light-trapping of infrared and visible light. The process also enables engineered positional placement of the inverted pyramids, thereby providing another parameter for optimal design of an optically efficient pattern. The proposed technique, a non-cleanroom process, is scalable for large-area micro-fabrication of high-efficiency thin c-Si photovoltaics. Optical wave simulations suggest the fabricated textured surface with 1.3 μm inverted pyramids and a single anti-reflective coating increases the relative energy conversion efficiency by 11% compared to the PERL-cell texture with 9 μm inverted pyramids on a 400 μm thick wafer. This efficiency gain is anticipated to improve further for thinner wafers due to enhanced diffractive light-trapping effects.

  10. A Reliability Comparison of Classical and Stochastic Thickness Margin Approaches to Address Material Property Uncertainties for the Orion Heat Shield

    NASA Technical Reports Server (NTRS)

    Sepka, Steve; Vander Kam, Jeremy; McGuire, Kathy

    2018-01-01

    The Orion Thermal Protection System (TPS) margin process uses a root-sum-square approach with branches addressing trajectory, aerothermodynamics, and material response uncertainties in ablator thickness design. The material response branch reduces the allowed bond line temperature between the Avcoat ablator and EA9394 adhesive by 60 C (108 F) from its peak allowed value of 260 C (500 F). This process is known as the Bond Line Temperature Material Margin (BTMM) and is intended to cover material property and performance uncertainties. The value of 60 C (108 F) is a constant, applied at any spacecraft body location and for any trajectory. By varying only material properties in a random (Monte Carlo) manner, the Perl-based script mcCHAR is used to investigate the confidence interval provided by the BTMM. In particular, this study looks at various locations on the Orion heat shield forebody for a guided and an abort (ballistic) trajectory.

  11. Wing Classification in the Virtual Research Center

    NASA Technical Reports Server (NTRS)

    Campbell, William H.

    1999-01-01

    The Virtual Research Center (VRC) is a Web site that hosts a database of documents organized to allow teams of scientists and engineers to store and maintain documents. A number of other workgroup-related capabilities are provided. My tasks as a NASA/ASEE Summer Faculty Fellow included developing a scheme for classifying the workgroups that use the VRC according to the various Divisions within NASA Enterprises. To this end I developed a plan to use several CGI Perl scripts to gather classification information from the leaders of the workgroups, and to display all the workgroups within a specified classification. I designed, implemented, and partially tested scripts which can be used to do the classification. I was also asked to consider directions for future development of the VRC. I think that the VRC can use XML to advantage. XML is a markup language with designer tags that can be used to build meaning into documents. An investigation of how CORBA, accessible through the object request broker included with JDK 1.2, might be used also seems justified.

  12. User Manual for Whisper-1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-01-26

    Whisper is a statistical analysis package developed in 2014 to support nuclear criticality safety (NCS) validation [1-3]. It uses the sensitivity profile data for an application as computed by MCNP6 [4-6] along with covariance files [7,8] for the nuclear data to determine a baseline upper-subcritical-limit (USL) for the application. Whisper version 1.0 was first developed and used at LANL in 2014 [3]. During 2015-2016, Whisper was updated to version 1.1 and is to be included with the upcoming release of MCNP6.2. This document describes the user input and options for running whisper-1.1, including 2 perl utility scripts that simplify ordinary NCS work, whisper_mcnp.pl and whisper_usl.pl. For many detailed references on the theory, applications, nuclear data & covariances, SQA, verification-validation, adjoint-based methods for sensitivity-uncertainty analysis, and more, see the Whisper - NCS Validation section of the MCNP Reference Collection at mcnp.lanl.gov. There are currently over 50 Whisper reference documents available.

  13. Dynamic online surveys and experiments with the free open-source software dynQuest.

    PubMed

    Rademacher, Jens D M; Lippke, Sonia

    2007-08-01

    With computers and the World Wide Web widely available, collecting data through Web browsers is an attractive method for the social sciences. In this article, conducting PC- and Web-based trials with the software package dynQuest is described. The software manages dynamic questionnaire-based trials over the Internet or on single computers, possibly as randomized controlled trials (RCTs) if two or more groups are involved. The choice of follow-up questions can depend on previous responses, as needed for matched interventions. Data are collected in a simple text-based database that can be imported easily into other programs for postprocessing and statistical analysis. The software consists of platform-independent scripts written in the programming language PERL that use the Common Gateway Interface between Web browser and server for submission of data through HTML forms. Advantages of dynQuest are parsimony, simplicity in use and installation, transparency, and reliability. The program is available as open-source freeware from the authors.

  14. Mercury Shopping Cart Interface

    NASA Technical Reports Server (NTRS)

    Pfister, Robin; McMahon, Joe

    2006-01-01

    Mercury Shopping Cart Interface (MSCI) is a reusable component of the Power User Interface 5.0 (PUI) program described in another article. MSCI is a means of encapsulating the logic and information needed to describe an orderable item consistent with Mercury Shopping Cart service protocol. Designed to be used with Web-browser software, MSCI generates Hypertext Markup Language (HTML) pages on which ordering information can be entered. MSCI comprises two types of Practical Extraction and Report Language (PERL) modules: template modules and shopping-cart logic modules. Template modules generate HTML pages for entering the required ordering details and enable submission of the order via a Hypertext Transfer Protocol (HTTP) post. Shopping cart modules encapsulate the logic and data needed to describe an individual orderable item to the Mercury Shopping Cart service. These modules evaluate information entered by the user to determine whether it is sufficient for the Shopping Cart service to process the order. Once an order has been passed from MSCI to a deployed Mercury Shopping Cart server, there is no further interaction with the user.

  15. SSRscanner: a program for reporting distribution and exact location of simple sequence repeats.

    PubMed

    Anwar, Tamanna; Khan, Asad U

    2006-02-20

    Simple sequence repeats (SSRs) have become important molecular markers for a broad range of applications, such as genome mapping and characterization, phenotype mapping, marker-assisted selection of crop plants and a range of molecular ecology and diversity studies. These repeated DNA sequences are found in both prokaryotes and eukaryotes. They are distributed almost at random throughout the genome, with motifs ranging from mononucleotide to trinucleotide repeats, and also occur as longer tracts (>6 repeating units). Most computer programs that find SSRs do not report their exact positions. The computer program SSRscanner was written to report the distribution, frequency and exact location of each SSR in a genome. SSRscanner is user friendly. It can search for repeats of any length and produce output with their exact position on the chromosome and their frequency of occurrence in the sequence. This program has been written in PERL and is freely available for non-commercial users by request from the authors. Please contact the authors by E-mail: huzzi99@hotmail.com.
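
    The core of such a tool fits in one regular expression. The sketch below is not SSRscanner itself, just a minimal Perl illustration of reporting motif, repeat count and exact 1-based position for mono- to trinucleotide tracts of six or more units.

        use strict;
        use warnings;

        # Find mono- to trinucleotide motifs repeated >= 6 times and
        # report their exact positions in the sequence.
        my $seq = 'ACGTATATATATATATGGCAGCAGCAGCAGCAGCATTT';
        while ($seq =~ /(([ACGT]{1,3}?)\2{5,})/g) {
            my ($tract, $motif) = ($1, $2);
            my $start = $-[1] + 1;    # 1-based start of the tract
            printf "%s x %d at position %d\n",
                   $motif, length($tract) / length($motif), $start;
        }

    On the dummy sequence above this prints "AT x 6 at position 5" and "GCA x 6 at position 18".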

  16. EVALLER: a web server for in silico assessment of potential protein allergenicity

    PubMed Central

    Barrio, Alvaro Martinez; Soeria-Atmadja, Daniel; Nistér, Anders; Gustafsson, Mats G.; Hammerling, Ulf; Bongcam-Rudloff, Erik

    2007-01-01

    Bioinformatics testing approaches for protein allergenicity, involving amino acid sequence comparisons, have evolved appreciably over the last several years, with increased sophistication and performance. EVALLER, the web server presented in this article, is based on our recently published ‘Detection based on Filtered Length-adjusted Allergen Peptides’ (DFLAP) algorithm, which affords in silico determination of potential protein allergenicity with high sensitivity and excellent specificity. To strengthen bioinformatics risk assessment in allergology, EVALLER provides a comprehensive outline of its judgment on a query protein's potential allergenicity. Each such textual output incorporates a score, a confidence value for the assignment and information on high- or low-scoring matches to identified allergen-related motifs, including their respective locations in the derived allergens. The interface, built on a modified Perl Open Source package, enables dynamic and color-coded graphic representation of key parts of the output. Moreover, pertinent regions can be examined closely through zoomed views. The server can be accessed at http://bioinformatics.bmc.uu.se/evaller.html. PMID:17537818

  17. A Reliability Comparison of Classical and Stochastic Thickness Margin Approaches to Address Material Property Uncertainties for the Orion Heat Shield

    NASA Technical Reports Server (NTRS)

    Sepka, Steven A.; McGuire, Mary Kathleen; Vander Kam, Jeremy C.

    2018-01-01

    The Orion Thermal Protection System (TPS) margin process uses a root-sum-square approach with branches addressing trajectory, aerothermodynamics, and material response uncertainties in ablator thickness design. The material response branch reduces the allowed bond line temperature between the Avcoat ablator and EA9394 adhesive by 60 C (108 F) from its peak allowed value of 260 C (500 F). This process is known as the Bond Line Temperature Material Margin (BTMM) and is intended to cover material property and performance uncertainties. The value of 60 C (108 F) is a constant, applied at any spacecraft body location and for any trajectory. By varying only material properties in a random (Monte Carlo) manner, the Perl-based script mcCHAR is used to investigate the confidence interval provided by the BTMM. In particular, this study looks at various locations on the Orion heat shield forebody for a guided and an abort (ballistic) trajectory.

  18. MAGI: a Node.js web service for fast microRNA-Seq analysis in a GPU infrastructure.

    PubMed

    Kim, Jihoon; Levy, Eric; Ferbrache, Alex; Stepanowsky, Petra; Farcas, Claudiu; Wang, Shuang; Brunner, Stefan; Bath, Tyler; Wu, Yuan; Ohno-Machado, Lucila

    2014-10-01

    MAGI is a web service for fast microRNA-Seq data analysis in a graphics processing unit (GPU) infrastructure. Using just a browser, users have access to results as web reports in just a few hours, a >600% end-to-end performance improvement over the state of the art. MAGI's salient features are (i) transfer of large input files in native FASTA with Qualities (FASTQ) format through drag-and-drop operations, (ii) rapid prediction of microRNA target genes leveraging parallel computing with GPU devices, (iii) all-in-one analytics with novel feature extraction, statistical tests for differential expression and diagnostic plot generation for quality control and (iv) interactive visualization and exploration of results in web reports that are readily available for publication. MAGI relies on the Node.js JavaScript framework, along with NVIDIA CUDA C, PHP: Hypertext Preprocessor (PHP), Perl and R. It is freely available at http://magi.ucsd.edu. © The Author 2014. Published by Oxford University Press.

  19. Evolving Strategies for the Incorporation of Bioinformatics Within the Undergraduate Cell Biology Curriculum

    PubMed Central

    Honts, Jerry E.

    2003-01-01

    Recent advances in genomics and structural biology have resulted in an unprecedented increase in biological data available from Internet-accessible databases. In order to help students effectively use this vast repository of information, undergraduate biology students at Drake University were introduced to bioinformatics software and databases in three courses, beginning with an introductory course in cell biology. The exercises and projects that were used to help students develop literacy in bioinformatics are described. In a recently offered course in bioinformatics, students developed their own simple sequence analysis tool using the Perl programming language. These experiences are described from the point of view of the instructor as well as the students. A preliminary assessment has been made of the degree to which students had developed a working knowledge of bioinformatics concepts and methods. Finally, some conclusions have been drawn from these courses that may be helpful to instructors wishing to introduce bioinformatics within the undergraduate biology curriculum. PMID:14673489

  20. An in silico pipeline to filter the Toxoplasma gondii proteome for proteins that could traffic to the host cell nucleus and influence host cell epigenetic regulation.

    PubMed

    Syn, Genevieve; Blackwell, Jenefer M; Jamieson, Sarra E; Francis, Richard W

    2018-01-01

    Toxoplasma gondii uses epigenetic mechanisms to regulate both endogenous and host cell gene expression. To identify genes with putative epigenetic functions, we developed an in silico pipeline to interrogate the T. gondii proteome of 8313 proteins. Step 1 employs PredictNLS and NucPred to identify genes predicted to target eukaryotic nuclei. Step 2 uses GOLink to identify proteins of epigenetic function based on Gene Ontology terms. This resulted in 611 putative nuclear localised proteins with predicted epigenetic functions. Step 3 filtered for secretory proteins using SignalP, SecretomeP, and experimental data. This identified 57 of the 611 putative epigenetic proteins as likely to be secreted. The pipeline is freely available online, uses open access tools and software with user-friendly Perl scripts to automate and manage the results, and is readily adaptable to undertake any such in silico search for genes contributing to particular functions.
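
    The glue between the three steps is essentially set intersection on gene identifiers, which the sketch below illustrates in Perl; the file names are hypothetical, and each file is assumed to hold one gene ID per line as produced by the respective step.

        use strict;
        use warnings;

        # Read a one-ID-per-line file into a hash for fast membership tests.
        sub read_ids {
            my ($file) = @_;
            open my $fh, '<', $file or die "cannot open $file: $!";
            my %ids;
            while (<$fh>) { chomp; $ids{$_} = 1 if length }
            close $fh;
            return \%ids;
        }

        my $nuclear    = read_ids('step1_nuclear_targeted.txt');  # PredictNLS/NucPred
        my $epigenetic = read_ids('step2_epigenetic_go.txt');     # GOLink
        my $secreted   = read_ids('step3_secreted.txt');          # SignalP/SecretomeP

        # Keep only the IDs that survive all three filters.
        my @candidates = sort grep { $epigenetic->{$_} && $secreted->{$_} }
                              keys %$nuclear;
        print scalar(@candidates), " candidate proteins\n";
        print "$_\n" for @candidates;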

  1. Simple sequence repeat marker loci discovery using SSR primer.

    PubMed

    Robinson, Andrew J; Love, Christopher G; Batley, Jacqueline; Barker, Gary; Edwards, David

    2004-06-12

    Simple sequence repeats (SSRs) have become important molecular markers for a broad range of applications, such as genome mapping and characterization, phenotype mapping, marker-assisted selection of crop plants and a range of molecular ecology and diversity studies. With the increase in the availability of DNA sequence information, an automated process to identify and design PCR primers for amplification of SSR loci would be a useful tool in plant breeding programs. We report an application that integrates SPUTNIK, an SSR repeat finder, with Primer3, a PCR primer design program, into one pipeline tool, SSR Primer. On submission of multiple FASTA formatted sequences, the script screens each sequence for SSRs using SPUTNIK. The results are then passed to Primer3 for locus-specific primer design. The script makes use of a Web-based interface, enabling remote use. This program has been written in PERL and is freely available for non-commercial users by request from the authors. The Web-based version may be accessed at http://hornbill.cspp.latrobe.edu.au/
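
    The hand-off to Primer3 is worth illustrating because it happens through Primer3's Boulder-IO record format. The Perl fragment below is a sketch of that step, not SSR Primer's actual code; the tag names follow recent primer3 releases (2004-era releases used an older tag set), and the coordinates are placeholders for values parsed from the repeat finder.

        use strict;
        use warnings;

        # Emit one Boulder-IO record asking Primer3 for primers that
        # flank an SSR locus; pipe the output into primer3_core.
        my $id  = 'seq01';
        my $seq = 'ACGT' x 75;                    # dummy template sequence
        my ($ssr_start, $ssr_len) = (120, 24);    # locus from the repeat finder

        print "SEQUENCE_ID=$id\n";
        print "SEQUENCE_TEMPLATE=$seq\n";
        print "SEQUENCE_TARGET=$ssr_start,$ssr_len\n";  # primers must flank this
        print "PRIMER_PRODUCT_SIZE_RANGE=100-300\n";
        print "=\n";                                    # record terminator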

  2. CERES AuTomAted job Loading SYSTem (CATALYST): An automated workflow manager for satellite data production

    NASA Astrophysics Data System (ADS)

    Gleason, J. L.; Hillyer, T. N.; Wilkins, J.

    2012-12-01

    The CERES Science Team integrates data from 5 CERES instruments onboard the Terra, Aqua and NPP missions. The processing chain fuses CERES observations with data from 19 other unique sources. The addition of CERES Flight Model 5 (FM5) onboard NPP, coupled with ground processing system upgrades, further emphasizes the need for an automated job-submission utility to manage multiple processing streams concurrently. The operator-driven, legacy-processing approach relied on manually staging data from magnetic tape to the limited spinning disk attached to a shared-memory-architecture system. The migration of CERES production code to a distributed, cluster computing environment with approximately one petabyte of spinning disk containing all precursor input data products facilitates the development of a CERES-specific, automated workflow manager. In the cluster environment, I/O is the primary system resource in contention across jobs. Therefore, system load can be maximized with a throttling workload manager. This poster discusses a Java and Perl implementation of an automated job management tool tailored for CERES processing.
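
    The throttling idea in the last two sentences can be sketched in a few lines of Perl with the CPAN module Parallel::ForkManager: cap the number of concurrently running jobs so that I/O, the contended resource, is never oversubscribed. The job list and the cap below are hypothetical; this illustrates the concept rather than the CATALYST code.

        use strict;
        use warnings;
        use Parallel::ForkManager;

        my $max_concurrent = 8;                  # tuned to the I/O budget
        my $pm = Parallel::ForkManager->new($max_concurrent);

        my @jobs = map { "job_$_.sh" } 1 .. 100; # placeholder job scripts
        for my $job (@jobs) {
            $pm->start and next;                 # parent: queue next job
            system('/bin/sh', $job);             # child: run one job
            $pm->finish;                         # child exits
        }
        $pm->wait_all_children;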

  3. A basic analysis toolkit for biological sequences

    PubMed Central

    Giancarlo, Raffaele; Siragusa, Alessandro; Siragusa, Enrico; Utro, Filippo

    2007-01-01

    This paper presents a software library, nicknamed BATS, for some basic sequence analysis tasks: local alignments, via approximate string matching, and global alignments, via longest common subsequence and alignments with affine and concave gap cost functions. Moreover, it also supports filtering operations to select strings from a set and establish their statistical significance, via z-score computation. None of the algorithms is new, but although they are generally regarded as fundamental for sequence analysis, they had not previously been implemented in a single and consistent software package. Therefore, our main contribution is to fill this gap between algorithmic theory and practice by providing an extensible and easy-to-use software library that includes algorithms for the mentioned string matching and alignment problems. The library consists of C/C++ library functions as well as Perl library functions. It can be interfaced with Bioperl and can also be used as a stand-alone system with a GUI. The software is available under the GNU GPL. PMID:17877802

  4. TreSpEx—Detection of Misleading Signal in Phylogenetic Reconstructions Based on Tree Information

    PubMed Central

    Struck, Torsten H

    2014-01-01

    Phylogenies of species or genes are commonplace nowadays in many areas of comparative biological studies. However, phylogenetic reconstructions must contend with artificial signals such as paralogy, long-branch attraction, saturation, or conflict between different datasets. These signals might mislead the reconstruction even in phylogenomic studies employing hundreds of genes. Unfortunately, there has been no program allowing the detection of such effects in combination with an implementation into automatic process pipelines. TreSpEx (Tree Space Explorer) now combines different approaches (including statistical tests), which utilize tree-based information like nodal support or patristic distances (PDs) to identify misleading signals. The program enables the parallel analysis of hundreds of trees and/or predefined gene partitions, and being command-line driven, it can be integrated into automatic process pipelines. TreSpEx is implemented in Perl and supported on Linux, Mac OS X, and MS Windows. Source code, binaries, and additional material are freely available at http://www.annelida.de/research/bioinformatics/software.html. PMID:24701118

  5. Diagnosis of electrocution: The application of scanning electron microscope and energy-dispersive X-ray spectroscopy in five cases.

    PubMed

    Visonà, S D; Chen, Y; Bernardi, P; Andrello, L; Osculati, A

    2018-03-01

    Deaths from electricity generally do not have specific findings at autopsy. The diagnosis is commonly based on the circumstances of the death and the morphologic findings, above all the current mark. Yet the skin injury due to electrocution often cannot be differentiated with certainty from other kinds of thermal injury. Therefore, there is great interest in finding specific markers of electrocution. The search for metallization of the skin using a Scanning Electron Microscope equipped with an Energy Dispersive X-Ray Spectroscopy (EDS) probe is of special importance for achieving a definite diagnosis in cases of suspected electrocution. We selected five cases in which electrocution was extremely likely given the circumstances of the death. In each case a forensic autopsy was performed. The skin specimens were stained with Hematoxylin-Eosin and Perls' stain, and the skin lesions were examined with a scanning electron microscope equipped with an EDS probe in order to evaluate the ultrastructural morphological features and the presence of deposits on the surface of the skin. The typical skin injury of electrocution (the current mark) was macroscopically detected in all of the cases. Microscopic examination of the skin lesions revealed the typical spherical vacuoles in the horny layer and, in the epidermis, elongation of the cell nuclei as well as necrosis. Perls' staining was negative in 4 out of 6 cases. Ultrastructural morphology revealed evident vacuolization of the horny layer, elongation of epidermal cells, and coagulation of the elastic fibers. In the specimens collected from the site of contact with the conductor in cases 1 and 2, the Kα peaks of iron were detected. In the corresponding specimens taken from cases 2, 4 and 5, the microanalysis showed the Kα peaks of titanium. In case 3, titanium and carbon were found. When electrocution is suspected, the integrated use of different tools is recommended, including macroscopic observation, H&E staining, iron-specific staining, scanning electron microscopy and EDS microanalysis. Only careful interpretation of the results provided by all these methods can allow the pathologist to correctly identify the cause of death. In particular, the present study suggests that microanalysis (SEM-EDS) represents a very useful tool for the diagnosis of electrocution, allowing the detection and identification of metals embedded in the skin and their evaluation in the context of the ultrastructural morphology. Copyright © 2018. Published by Elsevier B.V.

  6. Programmatic access to data and information at the IRIS DMC via web services

    NASA Astrophysics Data System (ADS)

    Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.

    2011-12-01

    The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC before data are shipped to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by and useful to the research community. The web services are relatively simple to understand and use, and will form the foundation on which future DMC access tools will be built. Based on standard Web technologies, they can be accessed programmatically with a wide range of programming languages (e.g. Perl, Python, Java), with command line utilities such as wget and curl, or with any web browser. We anticipate these services being used for everything from simple command line access and shell scripts to integration within complex data processing software. In addition to improving access to our data by the seismological community, the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles and zeros convention and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned data. By using this library a developer will not need to learn the details of the service interfaces or understand the data formats returned. This library will be used to build the software bridge needed to request data and information from within MATLAB®. We also provide several client scripts written in Perl for the retrieval of waveform data, metadata and earthquake catalogs using command line programs. For more information on the DMC's web services please visit http://www.iris.edu/ws/
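
    As the abstract notes, any language with an HTTP client will do; a Perl example of fetching a time series segment is sketched below. The query parameters shown (network, station, location, channel, time window, output format) follow the conventions these services have used, but the exact parameter names are an assumption here and should be checked against http://www.iris.edu/ws/ rather than taken from this sketch.

        use strict;
        use warnings;
        use LWP::UserAgent;

        # Request ten minutes of ASCII data from ws-timeseries
        # (parameter names are an assumption; see the service docs).
        my $url = 'http://www.iris.edu/ws/timeseries/query'
                . '?net=IU&sta=ANMO&loc=00&cha=BHZ'
                . '&starttime=2011-01-01T00:00:00'
                . '&endtime=2011-01-01T00:10:00'
                . '&output=ascii';

        my $ua  = LWP::UserAgent->new(timeout => 60);
        my $res = $ua->get($url);
        die 'request failed: ', $res->status_line, "\n"
            unless $res->is_success;
        print $res->decoded_content;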

  7. Tracking iron in multiple sclerosis: a combined imaging and histopathological study at 7 Tesla

    PubMed Central

    Hametner, Simon; Yao, Bing; van Gelderen, Peter; Merkle, Hellmut; Cantor, Fredric K.; Lassmann, Hans; Duyn, Jeff H.

    2011-01-01

    Previous authors have shown that the transverse relaxivity R2* and frequency shifts that characterize gradient echo signal decay in magnetic resonance imaging are closely associated with the distribution of iron and myelin in the brain's white matter. In multiple sclerosis, iron accumulation in brain tissue may reflect a multiplicity of pathological processes. Hence, iron may have the unique potential to serve as an in vivo magnetic resonance imaging tracer of disease pathology. To investigate the ability of iron in tracking multiple sclerosis-induced pathology by magnetic resonance imaging, we performed qualitative histopathological analysis of white matter lesions and normal-appearing white matter regions with variable appearance on gradient echo magnetic resonance imaging at 7 Tesla. The samples used for this study derive from two patients with multiple sclerosis and one non-multiple sclerosis donor. Magnetic resonance images were acquired using a whole body 7 Tesla magnetic resonance imaging scanner equipped with a 24-channel receive-only array designed for tissue imaging. A 3D multi-gradient echo sequence was obtained and quantitative R2* and phase maps were reconstructed. Immunohistochemical stainings for myelin and oligodendrocytes, microglia and macrophages, ferritin and ferritin light polypeptide were performed on 3- to 5-µm thick paraffin sections. Iron was detected with Perls' staining and 3,3′-diaminobenzidine-tetrahydrochloride enhanced Turnbull blue staining. In multiple sclerosis tissue, iron presence invariably matched an increase in R2*. Conversely, an R2* increase was not always associated with the presence of iron on histochemical staining. We interpret this finding as an effect of the embedding, sectioning and staining procedures, which likely affected the histopathological results but not the magnetic resonance imaging that was obtained before tissue manipulation. Several cellular sources of iron were identified, including oligodendrocytes in normal-appearing white matter and activated macrophages/microglia at the edges of white matter lesions. Additionally, in white matter lesions, iron precipitation in aggregates typical of microbleeds was shown by Perls' staining. Our combined imaging and pathological study shows that multi-gradient echo magnetic resonance imaging is a sensitive technique for the identification of iron in the brain tissue of patients with multiple sclerosis. However, magnetic resonance imaging-identified iron does not necessarily reflect pathology and may also be seen in apparently normal tissue. Iron identification by multi-gradient echo magnetic resonance imaging in diseased tissues can shed light on the pathological processes when coupled with topographical information and patient disease history. PMID:22171355

  8. Investigation of the possible connection of rock and soil geochemistry to the occurrence of high rates of neurodegenerative diseases on Guam and a hypothesis for the cause of the diseases

    USGS Publications Warehouse

    Miller, William R.; Sanzolone, Richard F.

    2003-01-01

    High incidences of neurodegenerative diseases, mainly dementia, parkinsonism, and amyotrophic lateral sclerosis, occur on the island of Guam (Koerner, 1952; Kurland and Mulder, 1954). The occurrence and description of the diseases and a summary of the investigations can be found in Perl (1997). The diseases have been more prevalent along the southern coast, particularly in the small villages of Umatac, Merizo, and Inarajan (Reed and Brody, 1975; Roman, 1996; Perl, 1997) (fig. 1), referred to as the southern villages in this report. Tertiary volcanic rocks underlie most of the southern part of the island, including these villages. The northern part of Guam, with lower incidences of the diseases, consists of carbonate rocks. Epidemiological studies beginning in the early 1950s failed to show a genetic etiology (Plato and others, 1986; Zhang and others, 1990). In recent studies, the search for pathogenic mechanisms has shifted to environmental factors. Excesses or deficiencies of various elements from dietary sources, including drinking water, can have an effect on human health. These deficiencies or excesses can usually be attributed to the geochemical composition of the rocks and derived soils that underlie the area. An example is the high concentration of Se in soil associated with the occurrence of selenosis in adults (Mills, 1996). Yase (1972) suggested that the neurodegenerative diseases on Guam may be related to accumulation of trace elements such as manganese and aluminum, both of which may cause neurodegeneration. It has been suggested that a deficiency in calcium and magnesium in the soil and water, along with readily available aluminum, could be connected to the occurrence of the diseases (Gajdusek, 1982; Yanagihara and others, 1984; Garruto and others, 1989). Some of the studies investigated metal exposure, particularly aluminum and manganese, and deficiencies in calcium and magnesium (Garruto and others, 1984). Aluminum has been shown to have neurotoxic effects (MacDonald and Martin, 1988) and has been implicated in the pathogenesis of Alzheimer's disease and similar dementia by Perl and others (1982). Studies of soils developed on volcanic rocks on Guam and other islands by McLachlan and others (1989) found that soils on Guam averaged a 42-fold higher yield of elutable aluminum than soils developed on volcanic rocks on Jamaica or Palau. They did not detect unusually high dietary aluminum or low dietary calcium, but concluded that the soils, and possibly the dusts, of Guam might be a major source of aluminum entering the bodies of the inhabitants. This study was conducted to investigate the geochemistry of the soils and rocks of the volcanic southern part of the island of Guam, particularly in the vicinity of the three southern villages (Umatac, Merizo, and Inarajan) with high incidences of the diseases. In addition to total chemical analyses of the soils and rocks, various extractions of soils were carried out, and both excesses and deficiencies of various elements were investigated. Because soluble aluminum in the soil was shown by McLachlan and others (1989) to be unusually high, water-soluble extractions as well as sequential extractions of the soils were carried out. In addition, elements such as aluminum found in dust can traverse the nose-brain barrier in experimental animals (Sunderman, 2000), and respiratory epithelium is known to contain the highest concentration of aluminum in the human body (Tipton and others, 1957). 
The availability of elements from human inhalation of soil-derived dust, particularly aluminum, was investigated; the available elements were determined by extractions of soils using a simulated lung-fluid extraction. In order to compare the chemical data for rocks and soils from Guam with rocks and soils elsewhere, samples of similar rocks and soils were collected in the western United States and analyzed in the same way as the Guam samples. The complete chemical analyses of the soils, rocks, and streambed sediments, as well as descriptions of the methods used, can be found in Miller and others (2002).

  9. SINE_scan: an efficient tool to discover short interspersed nuclear elements (SINEs) in large-scale genomic datasets.

    PubMed

    Mao, Hongliang; Wang, Hao

    2017-03-01

    Short Interspersed Nuclear Elements (SINEs) are transposable elements (TEs) that amplify through a copy-and-paste mode via RNA intermediates. The computational identification of new SINEs is challenging because of their weak structural signals and rapid diversification in sequences. Here we report SINE_Scan, a highly efficient program to predict SINE elements in genomic DNA sequences. SINE_Scan integrates hallmarks of SINE transposition, copy number and structural signals to identify a SINE element. SINE_Scan outperforms the previously published de novo SINE discovery program. It shows high sensitivity and specificity in 19 plant and animal genome assemblies, of which sizes vary from 120 Mb to 3.5 Gb. It identifies numerous new families and substantially increases the estimation of the abundance of SINEs in these genomes. The code of SINE_Scan is freely available at http://github.com/maohlzj/SINE_Scan, implemented in PERL and supported on Linux. Contact: wangh8@fudan.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  10. SINE_scan: an efficient tool to discover short interspersed nuclear elements (SINEs) in large-scale genomic datasets

    PubMed Central

    Mao, Hongliang

    2017-01-01

    Motivation: Short Interspersed Nuclear Elements (SINEs) are transposable elements (TEs) that amplify through a copy-and-paste mode via RNA intermediates. The computational identification of new SINEs is challenging because of their weak structural signals and rapid diversification in sequences. Results: Here we report SINE_Scan, a highly efficient program to predict SINE elements in genomic DNA sequences. SINE_Scan integrates hallmarks of SINE transposition, copy number and structural signals to identify a SINE element. SINE_Scan outperforms the previously published de novo SINE discovery program. It shows high sensitivity and specificity in 19 plant and animal genome assemblies, of which sizes vary from 120 Mb to 3.5 Gb. It identifies numerous new families and substantially increases the estimation of the abundance of SINEs in these genomes. Availability and Implementation: The code of SINE_Scan is freely available at http://github.com/maohlzj/SINE_Scan, implemented in PERL and supported on Linux. Contact: wangh8@fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062442

  11. ActionMap: A web-based software that automates loci assignments to framework maps.

    PubMed

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-07-01

    Genetic linkage computation may be a repetitive and time consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms were designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/).

  12. ActionMap: a web-based software that automates loci assignments to framework maps

    PubMed Central

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-01-01

    Genetic linkage computation may be a repetitive and time consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms were designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/). PMID:12824426

  13. Forensic Analysis of Compromised Computers

    NASA Technical Reports Server (NTRS)

    Wolfe, Thomas

    2004-01-01

    Directory Tree Analysis File Generator is a Practical Extraction and Reporting Language (PERL) script that simplifies and automates the collection of information for forensic analysis of compromised computer systems. During such an analysis, it is sometimes necessary to collect and analyze information about files on a specific directory tree. Directory Tree Analysis File Generator collects information of this type (except information about directories) and writes it to a text file. In particular, the script asks the user for the root of the directory tree to be processed, the name of the output file, and the number of subtree levels to process. The script then processes the directory tree and writes out the aforementioned text file. The format of the text file is designed to enable submission of the file as input to a spreadsheet program, wherein the forensic analysis is performed. The analysis usually consists of sorting the files and examining such characteristics of the files as ownership, time of creation, and time of most recent access, all of which are among the data included in the text file.
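
    The script's core can be sketched with Perl's standard File::Find module. The output layout of the real script is not given in the abstract, so the tab-separated format below (one line per file, spreadsheet-ready) is an assumption; note also that on Unix the closest stat field to "time of creation" is the inode change time.

        use strict;
        use warnings;
        use File::Find;
        use File::Spec;

        my ($root, $out_file, $max_depth) = @ARGV;
        die "usage: $0 <root> <outfile> <depth>\n" unless defined $max_depth;

        open my $out, '>', $out_file or die "cannot open $out_file: $!";
        print {$out} join("\t", qw(path uid gid changed accessed)), "\n";

        find({
            no_chdir => 1,
            wanted   => sub {
                return unless -f $_;                       # files only
                my @parts = File::Spec->splitdir(
                                File::Spec->abs2rel($_, $root));
                return if @parts - 1 > $max_depth;         # depth limit
                my @st = stat($_) or return;
                print {$out} join("\t", $_,
                    $st[4], $st[5],                        # uid, gid
                    scalar localtime($st[10]),             # ctime
                    scalar localtime($st[8])), "\n";       # atime
            },
        }, $root);
        close $out;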

  14. ASPeak: an abundance sensitive peak detection algorithm for RIP-Seq.

    PubMed

    Kucukural, Alper; Özadam, Hakan; Singh, Guramrit; Moore, Melissa J; Cenik, Can

    2013-10-01

    Unlike DNA, RNA abundances can vary over several orders of magnitude. Thus, identification of RNA-protein binding sites from high-throughput sequencing data presents unique challenges. Although peak identification in ChIP-Seq data has been extensively explored, there are few bioinformatics tools tailored for peak calling on analogous datasets for RNA-binding proteins. Here we describe ASPeak (abundance sensitive peak detection algorithm), an implementation of an algorithm that we previously applied to detect peaks in exon junction complex RNA immunoprecipitation in tandem experiments. Our peak detection algorithm yields stringent and robust target sets enabling sensitive motif finding and downstream functional analyses. ASPeak is implemented in Perl as a complete pipeline that takes bedGraph files as input. ASPeak implementation is freely available at https://sourceforge.net/projects/as-peak under the GNU General Public License. ASPeak can be run on a personal computer, yet is designed to be easily parallelizable. ASPeak can also run on high performance computing clusters providing efficient speedup. The documentation and user manual can be obtained from http://master.dl.sourceforge.net/project/as-peak/manual.pdf.
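
    The bedGraph input format mentioned above is four whitespace-separated columns: chromosome, start, end, value. The Perl sketch below only parses that format and applies a single fixed cutoff, a deliberately naive stand-in for ASPeak's abundance-sensitive, per-transcript thresholds, to show where the algorithm plugs in.

        use strict;
        use warnings;

        # Echo bedGraph intervals whose signal exceeds a fixed cutoff.
        my $threshold = 10;                  # naive global cutoff
        while (<>) {
            next if /^(track|#)/;            # skip headers and comments
            chomp;
            my ($chrom, $start, $end, $value) = split /\s+/;
            next unless defined $value;
            print "$chrom\t$start\t$end\t$value\n" if $value >= $threshold;
        }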

  15. HPV-QUEST: A highly customized system for automated HPV sequence analysis capable of processing Next Generation sequencing data set.

    PubMed

    Yin, Li; Yao, Jiqiang; Gardner, Brent P; Chang, Kaifen; Yu, Fahong; Goodenow, Maureen M

    2012-01-01

    Next Generation sequencing (NGS) applied to human papilloma viruses (HPV) can provide sensitive methods to investigate the molecular epidemiology of multiple-type HPV infection. Currently a genotyping system with a comprehensive collection of updated HPV reference sequences and the capacity to handle NGS data sets has been lacking. HPV-QUEST was developed as an automated and rapid HPV genotyping system. The web-based HPV-QUEST subtyping algorithm was developed using HTML, PHP, the Perl scripting language, and MySQL as the database backend. HPV-QUEST includes a database of annotated HPV reference sequences with updated nomenclature covering 5 genera, 14 species and 150 mucosal and cutaneous types to genotype blasted query sequences. HPV-QUEST processes up to 10 megabases of sequences within 1 to 2 minutes. Results are reported in html, text and excel formats and display e-value, blast score, and local and coverage identities; provide genus, species, type, infection site and risk for the best-matched reference HPV sequence; and produce results ready for additional analyses.

  16. ExportAid: database of RNA elements regulating nuclear RNA export in mammals.

    PubMed

    Giulietti, Matteo; Milantoni, Sara Armida; Armeni, Tatiana; Principato, Giovanni; Piva, Francesco

    2015-01-15

    Regulation of nuclear mRNA export or retention is carried out by RNA elements, but the mechanism is not yet well understood. To understand the mRNA export process, it is important to collect all the involved RNA elements and their trans-acting factors. By hand-curated literature screening we collected, in the ExportAid database, experimentally assessed data about RNA elements regulating nuclear export or retention of endogenous, heterologous or artificial RNAs in mammalian cells. This database could help to understand the RNA export language and to study possible export efficiency alterations owing to mutations or polymorphisms. Currently, ExportAid stores 235 RNA elements that increase export efficiency, 96 that decrease it, and 98 sequences assessed as neutral. Freely accessible without registration at http://www.introni.it/ExportAid/ExportAid.html. The database and web interface are implemented in Perl, MySQL, Apache and JavaScript, with all major browsers supported. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Assessment of Morphological and Functional Changes in Organs of Rats after Intramuscular Introduction of Iron Nanoparticles and Their Agglomerates

    PubMed Central

    Sizova, Elena; Miroshnikov, Sergey; Yausheva, Elena; Polyakova, Valentina

    2015-01-01

    The research was performed on male Wistar rats on the premise that preparations containing metal nanoparticles and their agglomerates have potential as new microelement supplements. Morphological and functional changes in tissues at the injection site, and the dynamics of chemical element metabolism in the body (25 indicators), were assessed after repeated intramuscular injections (seven in total) of a preparation containing agglomerates of iron nanoparticles. As a result, an iron depot formed in the myosymplasts at the injection sites. The number of muscle fibers staining positive with Perls' stain increased with the number of injections. However, the concentration of most chemical elements, including iron, significantly decreased in the skeletal muscle system as a whole (injection sites excluded), returning to control levels after the sixth and seventh injections. Among the organs studied (liver, kidneys, and spleen), Caspase-3 expression was detected only in the spleen, and it increased with the number of injections. Iron elimination from the nanoparticles and from their agglomerates proceeded with different intensities. PMID:25789310

  18. compendiumdb: an R package for retrieval and storage of functional genomics data.

    PubMed

    Nandal, Umesh K; van Kampen, Antoine H C; Moerland, Perry D

    2016-09-15

    Currently, the Gene Expression Omnibus (GEO) contains public data from over 1 million samples in more than 40 000 microarray-based functional genomics experiments. This provides a rich source of information for novel biological discoveries. However, unlocking this potential often requires retrieving and storing a large number of expression profiles from a wide range of different studies and platforms. The compendiumdb R package provides an environment for downloading functional genomics data from GEO, parsing the information into a local or remote database and interacting with the database using dedicated R functions, thus enabling seamless integration with other tools available in R/Bioconductor. The compendiumdb package is written in R, MySQL and Perl. Source code and binaries are available from CRAN (http://cran.r-project.org/web/packages/compendiumdb/) for all major platforms (Linux, MS Windows and OS X) under the GPLv3 license. Contact: p.d.moerland@amc.uva.nl. Supplementary data are available at Bioinformatics online.

  19. Simple proteomics data analysis in the object-oriented PowerShell.

    PubMed

    Mohammed, Yassene; Palmblad, Magnus

    2013-01-01

    Scripting languages such as Perl and Python are appreciated for solving simple, everyday tasks in bioinformatics. A more recent, object-oriented command shell and scripting language, Windows PowerShell, has many attractive features: an object-oriented interactive command line, fluent navigation and manipulation of XML files, ability to consume Web services from the command line, consistent syntax and grammar, rich regular expressions, and advanced output formatting. The key difference between classical command shells and scripting languages, such as bash, and object-oriented ones, such as PowerShell, is that in the latter the result of a command is a structured object with inherited properties and methods rather than a simple stream of characters. Conveniently, PowerShell is included in all new releases of Microsoft Windows and therefore already installed on most computers in classrooms and teaching labs. In this chapter we demonstrate how PowerShell in particular allows easy interaction with mass spectrometry data in XML formats, connection to Web services for tools such as BLAST, and presentation of results as formatted text or graphics. These features make PowerShell much more than "yet another scripting language."

  20. A compatible exon-exon junction database for the identification of exon skipping events using tandem mass spectrum data.

    PubMed

    Mo, Fan; Hong, Xu; Gao, Feng; Du, Lin; Wang, Jun; Omenn, Gilbert S; Lin, Biaoyang

    2008-12-16

    Alternative splicing is an important gene regulation mechanism. It is estimated that about 74% of multi-exon human genes have alternative splicing. High-throughput tandem (MS/MS) mass spectrometry provides valuable information for rapidly identifying potentially novel alternatively spliced protein products from experimental datasets. However, the ability to identify alternative splicing events through tandem mass spectrometry depends on the database against which the spectra are searched. We wrote scripts in Perl and BioPerl, using MySQL and the Ensembl API, and built a theoretical exon-exon junction protein database from the Ensembl Core Database that accounts for all possible combinations of exons of a gene while preserving the frame of translation (i.e., keeping only in-phase exon-exon combinations). Using our liver cancer MS/MS dataset, we identified a total of 488 non-redundant peptides that represent putative exon skipping events. Our exon-exon junction database provides the scientific community with an efficient means to identify novel alternatively spliced (exon skipping) protein isoforms using mass spectrometry data. This database will be useful in annotating genome structures using rapidly accumulating proteomics data.
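
    The in-phase rule is simple to state in code: an exon ending at codon phase p can only be joined to an exon starting at phase p, so the downstream reading frame is preserved. A toy Perl sketch with invented exon data (not the authors' scripts):

      #!/usr/bin/perl
      # Toy sketch of the in-phase rule: keep an exon pair (i, j) only
      # if exon i ends at the codon phase at which exon j starts.
      use strict;
      use warnings;

      # Hypothetical exons: [start_phase, length]
      my @exons = ( [0, 120], [0, 85], [1, 95], [2, 301] );

      for my $i (0 .. $#exons) {
          for my $j ($i + 1 .. $#exons) {
              my ($phase_i, $len_i) = @{ $exons[$i] };
              my $end_phase     = ($phase_i + $len_i) % 3;
              my $start_phase_j = $exons[$j][0];
              print "junction $i-$j is in phase\n"
                  if $end_phase == $start_phase_j;
          }
      }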

  1. DB Dehydrogenase: an online integrated structural database on enzyme dehydrogenase.

    PubMed

    Nandy, Suman Kumar; Bhuyan, Rajabrata; Seal, Alpana

    2012-01-01

    Dehydrogenase enzymes are essential for metabolic processes. Shortage or malfunctioning of dehydrogenases often leads to acute diseases such as cancers, retinal diseases, diabetes mellitus, Alzheimer's disease, and hepatitis B and C. With the advancement of modern-day research, huge amounts of sequence, structural and functional data are generated every day, widening the gap between structural attributes and their functional understanding. DB Dehydrogenase is an effort to relate the functionality of dehydrogenases to their structures. It is a completely web-based structural database covering almost all dehydrogenases of known structure [~150 enzyme classes, ~1200 entries from ~160 organisms]. It was created by extracting and integrating various online resources to provide reliable data, and is implemented as a MySQL relational database with user-friendly web interfaces written in CGI Perl. Flexible search options are provided for data extraction and exploration. To summarize, with the sequence, structure and function of all dehydrogenases in one place, along with the necessary cross-referencing, this database will be useful for researchers carrying out further work in this field. The database is available for free at http://www.bifku.in/DBD/
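
    The architecture described, CGI Perl pages over a MySQL backend, follows a standard pattern. A minimal hedged sketch of that pattern (table and column names hypothetical, not DB Dehydrogenase's schema):

      #!/usr/bin/perl
      # Minimal CGI-plus-DBI pattern as used by many such databases.
      # Table and column names here are hypothetical.
      use strict;
      use warnings;
      use CGI;
      use DBI;

      my $q   = CGI->new;
      my $ec  = $q->param('ec') // '1.1.1.1';
      my $dbh = DBI->connect('DBI:mysql:database=dbd;host=localhost',
                             'user', 'password', { RaiseError => 1 });
      my $rows = $dbh->selectall_arrayref(
          'SELECT name, organism, pdb_id FROM dehydrogenase WHERE ec = ?',
          undef, $ec);

      print $q->header('text/html'), "<ul>\n";
      print "<li>$_->[0] ($_->[1], PDB $_->[2])</li>\n" for @$rows;
      print "</ul>\n";
      $dbh->disconnect;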

  2. ABM Drag_Pass Report Generator

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Gladden, Roy; Khanampornpan, Teerapat

    2008-01-01

    dragREPORT software was developed in parallel with abmREPORT, which is described in the preceding article; both programs were built on the capabilities created during that process. This tool generates a drag_pass report that summarizes vital information from the MRO aerobraking drag_pass build process, both to facilitate sequence reviews and to provide a high-level summary of the sequence for mission management. The script extracts information from the ENV, SSF, FRF, SCMFmax, and OPTG files, presenting it in a single, easy-to-check report that provides the majority of parameters needed for cross-checking and verification as part of the sequence review process. Prior to dragREPORT, all the needed information was spread across a number of different files, each in a different format. This software is a Perl script that extracts vital summary information and build-process details from a number of source files into a single, concise report used to aid the MPST sequence review process and to provide a high-level summary of the sequence for mission management reference. The software could be adapted for future aerobraking missions to provide similar report, review, and summary information.
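
    The consolidation pattern, pulling named parameters out of several differently formatted source files into one report, is a classic Perl task. A hedged sketch (file names and patterns invented, not the MRO formats):

      #!/usr/bin/perl
      # Sketch: extract one parameter per source file via a regex and
      # print a single consolidated report. Illustrative only.
      use strict;
      use warnings;

      my %extract = (
          'run.env' => qr/^DRAG_ALTITUDE\s*=\s*(\S+)/,
          'seq.ssf' => qr/^START_TIME:\s*(\S+)/,
          'cmd.frf' => qr/^COMMAND_COUNT\s+(\d+)/,
      );

      print "Drag-pass summary\n", '-' x 30, "\n";
      for my $file (sort keys %extract) {
          open my $fh, '<', $file or do { warn "skip $file: $!"; next };
          while (<$fh>) {
              print "$file: $1\n" if /$extract{$file}/;
          }
          close $fh;
      }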

  3. MOCASSIN-prot: a multi-objective clustering approach for protein similarity networks.

    PubMed

    Keel, Brittney N; Deng, Bo; Moriyama, Etsuko N

    2018-04-15

    Proteins often include multiple conserved domains. Various evolutionary events, including duplication and loss of domains, domain shuffling, and sequence divergence, contribute to generating complexity in protein structures and, consequently, in their functions. The evolutionary history of proteins is hence best modeled through networks that incorporate information both from sequence divergence and from domain content. Here, a game-theoretic approach proposed for protein network construction is adapted into the framework of multi-objective optimization and extended to incorporate a clustering refinement procedure. The new method, MOCASSIN-prot, was applied to cluster multi-domain proteins from ten genomes. The performance of MOCASSIN-prot was compared against two protein clustering methods, Markov clustering (TRIBE-MCL) and spectral clustering (SCPS). We showed that, compared to these two methods, MOCASSIN-prot, which uses both domain composition and quantitative sequence similarity information, generates fewer false positives. It achieves more functionally coherent protein clusters and better differentiates protein families. MOCASSIN-prot, implemented in Perl and Matlab, is freely available at http://bioinfolab.unl.edu/emlab/MOCASSINprot. Contact: emoriyama2@unl.edu. Supplementary data are available at Bioinformatics online.

  4. Assessment of the systemic distribution of a bioconjugated anti-Her2 magnetic nanoparticle in a breast cancer model by means of magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Huerta-Núñez, L. F. E.; Villanueva-Lopez, G. Cleva; Morales-Guadarrama, A.; Soto, S.; López, J.; Silva, J. G.; Perez-Vielma, N.; Sacristán, E.; Gudiño-Zayas, Marco E.; González, C. A.

    2016-09-01

    The aim of this study was to determine the systemic distribution of magnetic nanoparticles of 100 nm diameter (MNPs) coupled to a specific monoclonal antibody, anti-Her2, in an experimental breast cancer (BC) model. The study was performed in two groups of Sprague-Dawley rats: control (n = 6) and chemically induced BC (n = 3). The bioconjugated "anti-Her2-MNPs" were intravenously administered, and magnetic resonance imaging (MRI) monitored their systemic distribution at seven time points after administration. Non-heme iron associated with the location of the bioconjugated anti-Her2-MNPs in splenic, hepatic, cardiac and tumor tissues was detected by Perls' Prussian blue (PPB) stain. Optical density measurements were used to semiquantitatively determine the iron present in tissues on the basis of grayscale-value integration of T1 and T2 MRI sequence images. The results indicated a delayed systemic distribution of MNPs in cancer compared to healthy conditions, with a maximum concentration of MNPs in cancer tissue at 24 h post-infusion.

  5. IVS Working Group 4: VLBI Data Structures

    NASA Astrophysics Data System (ADS)

    Gipson, J.

    2012-12-01

    I present an overview of the "openDB format" for storing, archiving, and processing VLBI data. In this scheme, most VLBI data are stored in NetCDF files. NetCDF has the advantage that there are interfaces for most common computer languages, including Fortran, Fortran-90, C, C++, and Perl, and for the most common operating systems, including Linux, Windows, and Mac. The data files for a particular session are organized by special ASCII "wrapper" files which contain pointers to the data files. This allows great flexibility in the processing and analysis of VLBI data. For example, it allows you to easily change subsets of the data used in the analysis, such as troposphere modeling, ionospheric calibration, editing, and ambiguity resolution. It also allows for extending the types of data used, e.g., source maps. I present a roadmap for the transition to this new format. The new format can already be used by VieVS and by the global mode of Solve. Plans are in the works for other software packages to be able to use the new format.
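
    The wrapper idea can be illustrated with a few lines of Perl that collect the NetCDF file names for one session; the wrapper syntax shown here is invented for illustration and is not the actual openDB grammar:

      #!/usr/bin/perl
      # Sketch: read an ASCII "wrapper" file that points at the NetCDF
      # data files of one VLBI session. Wrapper syntax invented.
      use strict;
      use warnings;

      my @datafiles;
      open my $fh, '<', 'session.wrp' or die "open: $!";
      while (<$fh>) {
          chomp;
          next if /^\s*(!|$)/;                  # comments / blank lines
          push @datafiles, $1 if /^FILE\s+(\S+\.nc)$/;
      }
      close $fh;
      print "session uses ", scalar @datafiles, " NetCDF files:\n";
      print "  $_\n" for @datafiles;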

  6. Production Maintenance Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jason Gabler, David Skinner

    2005-11-01

    PMI is an XML framework for formulating tests of software and software environments that operate in a relatively push-button manner, i.e., can be automated, and that provide results that are readily consumable/publishable via RSS. Insofar as possible, the tests are carried out in a manner congruent with real usage. PMI drives shell scripts via a Perl program which is in charge of timing, validating each test, and controlling the flow through sets of tests. Testing in PMI is built up hierarchically. A suite of tests may start by testing basic functionalities (file system is writable, compiler is found and functions, shell environment behaves as expected, etc.) and work up to larger, more complicated activities (execution of parallel code, file transfers, etc.). At each step in this hierarchy, a failure leads to the generation of a text message or RSS item that can be tagged as to who should be notified of the failure. PMI has been directed at two functionalities: (1) regular and automated testing of multi-user environments and (2) version-wise testing of new software releases prior to their deployment in a production mode.
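
    The pattern described, a Perl driver that times a shell test, validates its exit status, and reports failures as RSS, can be sketched in a few lines (script name and RSS fields invented, not PMI's actual output):

      #!/usr/bin/perl
      # Minimal sketch of the PMI idea: time a shell test, validate its
      # exit status, and emit an RSS <item> on failure. Illustrative only.
      use strict;
      use warnings;
      use Time::HiRes qw(time);

      my $test = './check_filesystem.sh';   # hypothetical test script
      my $t0   = time;
      my $rc   = system($test);
      my $dt   = time - $t0;

      if ($rc == 0) {
          printf "PASS %s (%.2f s)\n", $test, $dt;
      } else {
          my $title = sprintf 'Test failed: %s (exit %d)', $test, $rc >> 8;
          print "<item>\n",
                "  <title>$title</title>\n",
                sprintf("  <description>elapsed %.2f s; notify admin</description>\n", $dt),
                "</item>\n";
      }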

  7. R3D: Reduction Package for Integral Field Spectroscopy

    NASA Astrophysics Data System (ADS)

    Sánchez, Sebastián F.

    2011-06-01

    R3D was developed to reduce fiber-based integral field spectroscopy (IFS) data. The package comprises a set of command-line routines adapted for each of the reduction steps, suitable for creating pipelines. The routines have been tested against simulations, and against real data from various integral field spectrographs (PMAS, PPAK, GMOS, VIMOS and INTEGRAL). Particular attention is paid to the treatment of cross-talk. R3D unifies the reduction techniques for the different IFS instruments into a single one, in order to allow the general public to reduce data from different instruments in a homogeneous, consistent and simple way. Although still in its prototyping phase, it has proved useful for reducing PMAS (in both the Larr and the PPAK modes), VIMOS and INTEGRAL data. The current version has been coded in Perl, using PDL, in order to speed up the algorithm testing phase. Most of the time-critical algorithms have been translated to C, and it is our intention to translate all of them. However, even in this phase, R3D is fast enough to produce valuable science frames in reasonable time.

  8. SSRscanner: a program for reporting distribution and exact location of simple sequence repeats

    PubMed Central

    Anwar, Tamanna; Khan, Asad U

    2006-01-01

    Simple sequence repeats (SSRs) have become important molecular markers for a broad range of applications, such as genome mapping and characterization, phenotype mapping, marker-assisted selection of crop plants and a range of molecular ecology and diversity studies. These repeated DNA sequences are found in both prokaryotes and eukaryotes. They are distributed almost at random throughout the genome, ranging from mononucleotide to trinucleotide repeats. They are also found as longer tracts (> 6 repeating units). Most of the computer programs that find SSRs do not report their exact positions. The computer program SSRscanner was written to find the distribution, frequency and exact location of each SSR in a genome. SSRscanner is user-friendly. It can search for repeats of any length and produce output with their exact positions on the chromosome and their frequencies of occurrence in the sequence. Availability: This program is written in Perl and is freely available to non-commercial users by request from the authors. Please contact the authors by E-mail: huzzi99@hotmail.com PMID:17597863
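
    The "exact location" requirement is a natural fit for Perl's global match operator and pos(). A toy sketch (repeat-unit lengths and thresholds arbitrary, not SSRscanner's actual rules):

      #!/usr/bin/perl
      # Toy SSR finder in the spirit of SSRscanner: report each simple
      # repeat with its unit, copy number and exact start position.
      use strict;
      use warnings;

      my $seq = 'GGCACACACACATTTTTTGATCGATCGATCGAAT';
      while ($seq =~ /(([ACGT]{1,3})\2{3,})/g) {
          my ($tract, $unit) = ($1, $2);
          my $start = pos($seq) - length($tract) + 1;   # 1-based
          printf "unit %-3s x%-2d at position %d\n",
                 $unit, length($tract) / length($unit), $start;
      }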

  9. Ensembl regulation resources

    PubMed Central

    Zerbino, Daniel R.; Johnson, Nathan; Juetteman, Thomas; Sheppard, Dan; Wilder, Steven P.; Lavidas, Ilias; Nuhn, Michael; Perry, Emily; Raffaillac-Desfosses, Quentin; Sobral, Daniel; Keefe, Damian; Gräf, Stefan; Ahmed, Ikhlak; Kinsella, Rhoda; Pritchard, Bethan; Brent, Simon; Amode, Ridwan; Parker, Anne; Trevanion, Steven; Birney, Ewan; Dunham, Ian; Flicek, Paul

    2016-01-01

    New experimental techniques in epigenomics allow researchers to assay a diversity of highly dynamic features such as histone marks, DNA modifications or chromatin structure. The study of their fluctuations should provide insights into gene expression regulation, cell differentiation and disease. The Ensembl project collects and maintains the Ensembl regulation data resources on epigenetic marks, transcription factor binding and DNA methylation for human and mouse, as well as microarray probe mappings and annotations for a variety of chordate genomes. From this data, we produce a functional annotation of the regulatory elements along the human and mouse genomes with plans to expand to other species as data becomes available. Starting from well-studied cell lines, we will progressively expand our library of measurements to a greater variety of samples. Ensembl’s regulation resources provide a central and easy-to-query repository for reference epigenomes. As with all Ensembl data, it is freely available at http://www.ensembl.org, from the Perl and REST APIs and from the public Ensembl MySQL database server at ensembldb.ensembl.org. Database URL: http://www.ensembl.org PMID:26888907
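
    As with other Ensembl data, the regulation resources are reachable from the Perl API. The sketch below follows the registry/adaptor pattern from Ensembl's documentation; the "funcgen" adaptor and method names should be verified against the current API release, and the region queried is arbitrary:

      #!/usr/bin/perl
      # Sketch of the documented Ensembl Perl API pattern: connect via
      # the public registry, then query regulation ("funcgen") data
      # through feature adaptors. Check names against the current API.
      use strict;
      use warnings;
      use Bio::EnsEMBL::Registry;

      Bio::EnsEMBL::Registry->load_registry_from_db(
          -host => 'ensembldb.ensembl.org',
          -user => 'anonymous',
      );

      my $slice_adaptor = Bio::EnsEMBL::Registry->get_adaptor(
          'Human', 'core', 'Slice');
      my $rf_adaptor = Bio::EnsEMBL::Registry->get_adaptor(
          'Human', 'funcgen', 'RegulatoryFeature');

      my $slice = $slice_adaptor->fetch_by_region('chromosome', '1', 1, 1_000_000);
      my $feats = $rf_adaptor->fetch_all_by_Slice($slice);
      printf "%d regulatory features on chr1:1-1,000,000\n", scalar @$feats;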

  10. Internet Distribution of Spacecraft Telemetry Data

    NASA Technical Reports Server (NTRS)

    Specht, Ted; Noble, David

    2006-01-01

    Remote Access Multi-mission Processing and Analysis Ground Environment (RAMPAGE) is a Java-language server computer program that enables near-real-time display of spacecraft telemetry data on any authorized client computer that has access to the Internet and is equipped with Web-browser software. In addition to providing a variety of displays of the latest available telemetry data, RAMPAGE can deliver notification of an alarm by electronic mail. Subscribers can then use RAMPAGE displays to determine the state of the spacecraft and formulate a response to the alarm, if necessary. A user can query spacecraft mission data in either binary or comma-separated-value format by use of a Web form or a Practical Extraction and Reporting Language (PERL) script to automate the query process. RAMPAGE runs on Linux and Solaris server computers in the Ground Data System (GDS) of NASA's Jet Propulsion Laboratory and includes components designed specifically to make it compatible with legacy GDS software. The client/server architecture of RAMPAGE and the use of the Java programming language make it possible to utilize a variety of competitive server and client computers, thereby also helping to minimize costs.
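
    A query automated by a Perl script, as the abstract describes, reduces to an HTTP request. A hedged LWP::UserAgent sketch with an invented endpoint and parameters (not RAMPAGE's real URL scheme):

      #!/usr/bin/perl
      # Sketch of automating a telemetry query over HTTP. The endpoint
      # and parameter names are hypothetical.
      use strict;
      use warnings;
      use LWP::UserAgent;

      my $ua  = LWP::UserAgent->new(timeout => 30);
      my $url = 'https://rampage.example.nasa.gov/query'
              . '?channel=TEMP_01&format=csv&start=2006-001T00:00:00';
      my $res = $ua->get($url);
      die 'query failed: ', $res->status_line unless $res->is_success;
      print $res->decoded_content;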

  11. Extension of the COG and arCOG databases by amino acid and nucleotide sequences

    PubMed Central

    Meereis, Florian; Kaufmann, Michael

    2008-01-01

    Background: The current versions of the COG and arCOG databases, both excellent frameworks for studies in comparative and functional genomics, do not contain the nucleotide sequences corresponding to their protein or protein domain entries. Results: Using sequence information obtained from GenBank flat files covering the completely sequenced genomes of the COG and arCOG databases, we constructed NUCOCOG (nucleotide sequences containing COG databases) as an extended version including all nucleotide sequences and, in addition, the amino acid sequences originally utilized to construct the current COG and arCOG databases. We make available three comprehensive single XML files containing the complete databases, including all sequence information. In addition, we provide a web interface as a utility suitable for browsing the NUCOCOG database for sequence retrieval. The database is accessible at . Conclusion: NUCOCOG offers the possibility to analyze any sequence-related property in the context of the COG and arCOG framework simply by using script languages such as Perl applied to a large but single XML document. PMID:19014535
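
    Streaming a large single XML file from Perl is commonly done with XML::Twig, purging each entry after it is handled so memory stays bounded. Element and attribute names below are invented for illustration; NUCOCOG's actual schema differs:

      #!/usr/bin/perl
      # Sketch: stream a large XML database and report one property per
      # entry. Element and attribute names are hypothetical.
      use strict;
      use warnings;
      use XML::Twig;

      my $twig = XML::Twig->new(
          twig_handlers => {
              'orf' => sub {
                  my ($t, $orf) = @_;
                  printf "%s\t%d nt\n",
                         $orf->att('id'),
                         length($orf->first_child_text('nucseq'));
                  $t->purge;    # free memory after each entry
              },
          },
      );
      $twig->parsefile('nucocog.xml');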

  12. Generation of non-genomic oligonucleotide tag sequences for RNA template-specific PCR

    PubMed Central

    Pinto, Fernando Lopes; Svensson, Håkan; Lindblad, Peter

    2006-01-01

    Background: In order to overcome genomic DNA contamination in transcriptional studies, reverse template-specific polymerase chain reaction, a modification of reverse transcriptase polymerase chain reaction, is used. The possibility of using tags whose sequences are not found in the genome further improves reverse template-specific polymerase chain reaction experiments. Given the absence of software available to produce genome-suitable tags, a simple tool to fulfill this need was developed. Results: The program was developed in Perl, with separate use of the Basic Local Alignment Search Tool (BLAST), making the tool platform-independent (known to run on Windows XP and Linux). In order to test the performance of the generated tags, several molecular experiments were performed. The results show that Tagenerator is capable of generating tags with good priming properties, which will deliberately not result in PCR amplification of genomic DNA. Conclusion: The program Tagenerator is capable of generating tag sequences that combine absence from the genome with good priming properties for RT-PCR based experiments, circumventing the effects of genomic DNA contamination in an RNA sample. PMID:16820068
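
    The generate-and-screen idea can be sketched in Perl: propose random tags, BLAST each against the genome, and keep only tags with no hits. The blastn invocation and database name below are assumptions, not Tagenerator's actual commands:

      #!/usr/bin/perl
      # Sketch: propose random 20-mers and keep those with no BLAST hit
      # in the genome. blastn flags and database name are assumptions.
      use strict;
      use warnings;

      my @bases = qw(A C G T);
      for my $i (1 .. 5) {
          my $tag = join '', map { $bases[int rand 4] } 1 .. 20;
          open my $fh, '>', 'tag.fa' or die "open: $!";
          print {$fh} ">tag$i\n$tag\n";
          close $fh;
          my $hits = `blastn -query tag.fa -db genome -outfmt 6 -word_size 7`;
          print "$tag\t", ($hits eq '' ? 'genome-absent' : 'rejected'), "\n";
      }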

  13. ColorTree: a batch customization tool for phylogenic trees

    PubMed Central

    Chen, Wei-Hua; Lercher, Martin J

    2009-01-01

    Background: Genome sequencing projects and comparative genomics studies typically aim to trace the evolutionary history of large gene sets, often requiring human inspection of hundreds of phylogenetic trees. If trees are checked for compatibility with an explicit null hypothesis (e.g., the monophyly of certain groups), this daunting task is greatly facilitated by an appropriate coloring scheme. Findings: In this note, we introduce ColorTree, a simple yet powerful batch customization tool for phylogenic trees. Based on pattern matching rules, ColorTree applies a set of customizations to an input tree file, e.g., coloring labels or branches. The customized trees are saved to an output file, which can then be viewed and further edited by Dendroscope (a freely available tree viewer). ColorTree runs on any Perl installation as a stand-alone command line tool, and its application can thus be easily automated. This way, hundreds of phylogenic trees can be customized for easy visual inspection in a matter of minutes. Conclusion: ColorTree allows efficient and flexible visual customization of large tree sets through the application of a user-supplied configuration file to multiple tree files. PMID:19646243

  14. ColorTree: a batch customization tool for phylogenic trees.

    PubMed

    Chen, Wei-Hua; Lercher, Martin J

    2009-07-31

    Genome sequencing projects and comparative genomics studies typically aim to trace the evolutionary history of large gene sets, often requiring human inspection of hundreds of phylogenetic trees. If trees are checked for compatibility with an explicit null hypothesis (e.g., the monophyly of certain groups), this daunting task is greatly facilitated by an appropriate coloring scheme. In this note, we introduce ColorTree, a simple yet powerful batch customization tool for phylogenic trees. Based on pattern matching rules, ColorTree applies a set of customizations to an input tree file, e.g., coloring labels or branches. The customized trees are saved to an output file, which can then be viewed and further edited by Dendroscope (a freely available tree viewer). ColorTree runs on any Perl installation as a stand-alone command line tool, and its application can thus be easily automated. This way, hundreds of phylogenic trees can be customized for easy visual inspection in a matter of minutes. ColorTree allows efficient and flexible visual customization of large tree sets through the application of a user-supplied configuration file to multiple tree files.
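
    The rule-based customization both ColorTree records describe can be illustrated with a small Perl sketch that maps regex patterns to colors; the rule syntax and labels here are invented, and ColorTree's own configuration format differs:

      #!/usr/bin/perl
      # Sketch of rule-based label coloring: first matching pattern wins.
      # Rules and taxon labels invented for illustration.
      use strict;
      use warnings;

      my @rules = (
          [ qr/^Homo|^Pan/   => 'red'   ],
          [ qr/^Drosophila/  => 'blue'  ],
          [ qr/^Arabidopsis/ => 'green' ],
      );
      my @labels = qw(Homo_sapiens Pan_troglodytes Drosophila_melanogaster
                      Arabidopsis_thaliana Saccharomyces_cerevisiae);

      for my $label (@labels) {
          my ($color) = map { $_->[1] } grep { $label =~ $_->[0] } @rules;
          printf "%-28s %s\n", $label, $color // 'default';
      }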

  15. KAnalyze: a fast versatile pipelined K-mer toolkit

    PubMed Central

    Audano, Peter; Vannberg, Fredrik

    2014-01-01

    Motivation: Converting nucleotide sequences into short overlapping fragments of uniform length, k-mers, is a common step in many bioinformatics applications. While existing software packages count k-mers, few are optimized for speed, offer an application programming interface (API), a graphical interface or contain features that make it extensible and maintainable. We designed KAnalyze to compete with the fastest k-mer counters, to produce reliable output and to support future development efforts through well-architected, documented and testable code. Currently, KAnalyze can output k-mer counts in a sorted tab-delimited file or stream k-mers as they are read. KAnalyze can process large datasets with 2 GB of memory. This project is implemented in Java 7, and the command line interface (CLI) is designed to integrate into pipelines written in any language. Results: As a k-mer counter, KAnalyze outperforms Jellyfish, DSK and a pipeline built on Perl and Linux utilities. Through extensive unit and system testing, we have verified that KAnalyze produces the correct k-mer counts over multiple datasets and k-mer sizes. Availability and implementation: KAnalyze is available on SourceForge: https://sourceforge.net/projects/kanalyze/ Contact: fredrik.vannberg@biology.gatech.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24642064

  16. KAnalyze: a fast versatile pipelined k-mer toolkit.

    PubMed

    Audano, Peter; Vannberg, Fredrik

    2014-07-15

    Converting nucleotide sequences into short overlapping fragments of uniform length, k-mers, is a common step in many bioinformatics applications. While existing software packages count k-mers, few are optimized for speed, offer an application programming interface (API), a graphical interface or contain features that make it extensible and maintainable. We designed KAnalyze to compete with the fastest k-mer counters, to produce reliable output and to support future development efforts through well-architected, documented and testable code. Currently, KAnalyze can output k-mer counts in a sorted tab-delimited file or stream k-mers as they are read. KAnalyze can process large datasets with 2 GB of memory. This project is implemented in Java 7, and the command line interface (CLI) is designed to integrate into pipelines written in any language. As a k-mer counter, KAnalyze outperforms Jellyfish, DSK and a pipeline built on Perl and Linux utilities. Through extensive unit and system testing, we have verified that KAnalyze produces the correct k-mer counts over multiple datasets and k-mer sizes. KAnalyze is available on SourceForge: https://sourceforge.net/projects/kanalyze/.
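
    KAnalyze itself is Java, but the core counting step is easy to illustrate in Perl, the recurring language of these records: slide a window of length k along the sequence and increment a hash. Unlike KAnalyze's pipelined, disk-backed design, this toy version holds all counts in memory:

      #!/usr/bin/perl
      # Illustration of hash-based k-mer counting (not KAnalyze's
      # implementation, which is Java and disk-backed).
      use strict;
      use warnings;

      my $k   = 5;
      my $seq = 'ACGTACGTACGTTTACGTACG';
      my %count;
      $count{ substr $seq, $_, $k }++ for 0 .. length($seq) - $k;

      for my $kmer (sort { $count{$b} <=> $count{$a} || $a cmp $b } keys %count) {
          print "$kmer\t$count{$kmer}\n";
      }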

  17. Histopathologic Findings of Cutaneous Hyperpigmentation in Addison Disease and Immunostain of the Melanocytic Population.

    PubMed

    Fernandez-Flores, Angel; Cassarino, David S

    2017-12-01

    The histopathological features of cutaneous hyperpigmentation in Addison disease have only occasionally been reported; they include acanthosis, hyperkeratosis, focal parakeratosis, spongiosis, a superficial perivascular lymphocytic infiltrate, basal melanin hyperpigmentation, and superficial dermal melanophages. We present a study of 2 biopsies, from the arm and the thigh, in a 77-year-old woman with a long clinical history of Addison disease as well as senile purpura and female-pattern alopecia. The patient presented diffuse hyperpigmentation of the skin, more pronounced on her face and left upper forehead. The skin biopsies showed melanocytic hyperpigmentation of the basal layer of the epidermis without a remarkable dermal inflammatory infiltrate, as well as a small number of melanophages in the papillary dermis. In addition, we found lipofuscin in the luminal pole of the secretory epithelium of the eccrine glands. In the perieccrine areas, there was Perls-positive pigment in the cytoplasm of macrophages, most likely related to the senile purpura. An immunohistochemical study with Melan-A showed a melanocyte/keratinocyte ratio of 1:20 (5%) in the arm and of less than 1:50 (only 2 melanocytes in the whole section; <2%) in the thigh.

  18. A simple model of circadian rhythms based on dimerization and proteolysis of PER and TIM

    PubMed Central

    Tyson, JJ; Hong, CI; Thron, CD; Novak, B

    1999-01-01

    Many organisms display rhythms of physiology and behavior that are entrained to the 24-h cycle of light and darkness prevailing on Earth. Under constant conditions of illumination and temperature, these internal biological rhythms persist with a period close to 1 day ("circadian"), but it is usually not exactly 24 h. Recent discoveries have uncovered stunning similarities among the molecular circuitries of circadian clocks in mice, fruit flies, and bread molds. A consensus picture is coming into focus around two proteins (called PER and TIM in fruit flies), which dimerize and then inhibit transcription of their own genes. Although this picture seems to confirm a venerable model of circadian rhythms based on time-delayed negative feedback, we suggest that just as crucial to the circadian oscillator is a positive feedback loop based on stabilization of PER upon dimerization. These ideas can be expressed in simple mathematical form (phase plane portraits), and the model accounts naturally for several hallmarks of circadian rhythms, including temperature compensation and the per(L) mutant phenotype. In addition, the model suggests how an endogenous circadian oscillator could have evolved from a more primitive, light-activated switch. PMID:20540926
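
    The verbal scheme in this abstract (PER monomers dimerize; the dimer represses transcription of its own gene) can be written as a small ODE system. The equations below are a generic, textbook-style caricature of such a dimerization/negative-feedback oscillator, not the authors' published model: M is per mRNA, P_1 the PER monomer, P_2 the dimer; v_m and v_p are synthesis rates, a and b the dimerization and dissociation rates, and d_m, d_1, d_2 degradation rates. The positive feedback the authors emphasize corresponds to the dimer being more stable than the monomer (d_2 < d_1).

      \begin{aligned}
        \frac{dM}{dt}   &= \frac{v_m}{1 + (P_2/K)^2} - d_m M, \\
        \frac{dP_1}{dt} &= v_p M - 2 a P_1^2 + 2 b P_2 - d_1 P_1, \\
        \frac{dP_2}{dt} &= a P_1^2 - b P_2 - d_2 P_2
      \end{aligned}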

  19. Improvement of the software Bernese for SLR data processing in the Main Metrological Centre of the State Time and Frequency Service

    NASA Astrophysics Data System (ADS)

    Tsyba, E.; Kaufman, M.

    2015-08-01

    Preparatory work for resuming operational calculation of the Earth rotation parameters from satellite laser ranging data processing (LAGEOS 1, LAGEOS 2) was to be completed at the Main Metrological Centre of the State Time and Frequency Service (VNIIFTRI) in 2014. For this purpose, the BERNESE 5.2 software (Dach & Walser, 2014) was chosen as the base software; it has been used for many years at the Main Metrological Centre of the State Time and Frequency Service to process phase observations of GLONASS and GPS satellites. Although the announced capabilities of the BERNESE 5.2 software include SLR data processing, this has not been fully implemented. In particular, an essential element such as a correction on the local time scale ("time bias", as an input or resulting parameter) is missing. Therefore, additional program blocks have been developed and integrated into the BERNESE 5.2 software environment. The program blocks are written in the Perl and Matlab programming languages and can be used on both Windows and Linux, on 32-bit and 64-bit platforms.

  20. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    PubMed

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    Web-based bioinformatics services are rapidly proliferating owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to combine the advantages of web services with those of locally installed tools, we describe here a collection of proxy client tools for 42 major bioinformatics web services, in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means of discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionality for local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under the GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services, implemented in Perl, directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  1. Micromilling enhances iron bioaccessibility from wholegrain wheat.

    PubMed

    Latunde-Dada, G O; Li, X; Parodi, A; Edwards, C H; Ellis, P R; Sharp, P A

    2014-11-19

    Cereals constitute important sources of iron in the human diet; however, much of the iron in wheat is lost during processing for the production of white flour. This study employed novel food processing techniques to increase the bioaccessibility of naturally occurring iron in wheat. Iron was localized in wheat by Perls' Prussian blue staining. Soluble iron from digested wheat flour was measured by a ferrozine spectrophotometric assay. Iron bioaccessibility was determined using an in vitro simulated peptic-pancreatic digestion, followed by measurement of ferritin (a surrogate marker for iron absorption) in Caco-2 cells. Light microscopy revealed that iron in wheat was encapsulated in cells of the aleurone layer and remained intact after in vivo digestion and passage through the gastrointestinal tract. The solubility of iron in wholegrain wheat and in purified wheat aleurone increased significantly after enzymatic digestion with Driselase and following mechanical disruption using micromilling. Furthermore, following in vitro simulated peptic-pancreatic digestion, iron bioaccessibility, measured as ferritin formation in Caco-2 cells, was significantly higher (52%) from micromilled aleurone flour than from whole aleurone flour. Taken together, our data show that disruption of aleurone cell walls can increase iron bioaccessibility. Micromilled aleurone could provide an alternative strategy for iron fortification of cereal products.

  2. A collection of open source applications for mass spectrometry data mining.

    PubMed

    Gallardo, Óscar; Ovelleiro, David; Gay, Marina; Carrascal, Montserrat; Abian, Joaquin

    2014-10-01

    We present several bioinformatics applications for the identification and quantification of phosphoproteome components by MS. These applications include a front-end graphical user interface that combines several Thermo RAW format to MASCOT™ Generic Format extractors (EasierMgf), two graphical user interfaces for the search engines OMSSA and SEQUEST (OmssaGui and SequestGui), and three further applications: one for the management of databases in FASTA format (FastaTools), one for the integration of search results from up to three search engines (Integrator), and one for the visualization of mass spectra and their corresponding database search results (JsonVisor). These applications were developed to solve some of the common problems found in proteomic and phosphoproteomic data analysis and were integrated in the workflow for processing data and feeding our LymPHOS database. The applications were designed modularly and can be used standalone. These tools are written in the Perl and Python programming languages and are supported on Windows platforms. They are all released under an Open Source Software license and can be freely downloaded from our software repository hosted at GoogleCode.
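
    Of the utilities listed, the FASTA-management one builds on a format that takes only a few lines of Perl to read. A generic hedged reader (illustrative; not the FastaTools implementation, and the file name is hypothetical):

      #!/usr/bin/perl
      # Generic FASTA reader of the kind such database utilities build on.
      use strict;
      use warnings;

      my (%seq, $id);
      open my $fh, '<', 'proteins.fasta' or die "open: $!";
      while (<$fh>) {
          chomp;
          if (/^>(\S+)/) { $id = $1 }            # header: new record
          elsif ($id)    { $seq{$id} .= $_ }     # body: append sequence
      }
      close $fh;
      printf "%s\t%d aa\n", $_, length $seq{$_} for sort keys %seq;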

  3. Software for Managing Parametric Studies

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian

    2003-01-01

    The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control-language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to the automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
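
    Serializing an "experiment" object to and from disk, as described, is idiomatically done in Perl with Storable; ILab's actual mechanism is not specified beyond serialization, so the sketch below (with invented experiment fields) is illustrative:

      #!/usr/bin/perl
      # The standard Perl idiom for serializing a nested "experiment"
      # object to disk and back. Field names are invented.
      use strict;
      use warnings;
      use Storable qw(store retrieve);

      my $experiment = {
          name   => 'wing_sweep_study',
          inputs => { mach => [0.6, 0.7, 0.8], alpha => [0, 2, 4] },
          hosts  => ['node01', 'node02'],
      };
      store $experiment, 'experiment.sto' or die 'store failed';

      my $restored = retrieve 'experiment.sto';
      print "restored: $restored->{name}, ",
            scalar @{ $restored->{inputs}{mach} }, " Mach points\n";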

  4. FBIS: A regional DNA barcode archival & analysis system for Indian fishes.

    PubMed

    Nagpure, Naresh Sahebrao; Rashid, Iliyas; Pathak, Ajey Kumar; Singh, Mahender; Singh, Shri Prakash; Sarkar, Uttam Kumar

    2012-01-01

    A DNA barcode is a new tool for taxon recognition and classification of biological organisms, based on the sequence of a fragment of a mitochondrial gene, cytochrome c oxidase I (COI). In view of the growing importance of fish DNA barcoding for species identification, molecular taxonomy and fish diversity conservation, we developed the Fish Barcode Information System (FBIS) for Indian fishes, which will serve as a regional DNA barcode archival and analysis system. The database presently contains 2334 sequence records of the COI gene for 472 aquatic species belonging to 39 orders and 136 families, collected from available published data sources. Additionally, it contains information on the phenotype, distribution and IUCN Red List status of fishes. The web version of FBIS was designed using MySQL, Perl and PHP on the Linux operating platform to (a) store and manage acquisitions, (b) analyze and explore DNA barcode records, and (c) identify species and estimate genetic divergence. FBIS has also been integrated with appropriate tools for retrieving and viewing information about database statistics and taxonomy. It is expected that FBIS will be a potent information system for fish molecular taxonomy, phylogeny and genomics. The database is available for free at http://mail.nbfgr.res.in/fbis/

  5. DAVID-WS: a stateful web service to facilitate gene/protein list analysis

    PubMed Central

    Jiao, Xiaoli; Sherman, Brad T.; Huang, Da Wei; Stephens, Robert; Baseler, Michael W.; Lane, H. Clifford; Lempicki, Richard A.

    2012-01-01

    Summary: The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. Availability: The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html. Contact: xiaoli.jiao@nih.gov; rlempicki@nih.gov PMID:22543366
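
    The abstract mentions sample clients in Perl among other languages. A generic SOAP::Lite sketch of the stateful call pattern (authenticate once, then operate on server-side session state) is shown below; the endpoint, namespace and method signatures are placeholders rather than DAVID-WS's actual interface, which is distributed on the DAVID site:

      #!/usr/bin/perl
      # Generic SOAP::Lite client sketch for a stateful web service.
      # Endpoint, namespace and method names are placeholders.
      use strict;
      use warnings;
      use SOAP::Lite;

      my $soap = SOAP::Lite
          ->uri('https://service.example.gov/DAVIDService')
          ->proxy('https://service.example.gov/soap/endpoint');

      # Stateful session: authenticate once, then issue calls that act
      # on the server-side session state.
      my $ok   = $soap->authenticate('user@example.org')->result;
      my $list = $soap->addList('2919,6347,6348', 'ENTREZ_GENE_ID',
                                'demo_list', 0)->result;
      print "authenticated: $ok; list mapped: $list\n";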

  6. AutoFACT: An Automatic Functional Annotation and Classification Tool

    PubMed Central

    Koski, Liisa B; Gray, Michael W; Lang, B Franz; Burger, Gertraud

    2005-01-01

    Background: Assignment of function to new molecular sequence data is an essential step in genomics projects. The usual process involves similarity searches of a given sequence against one or more databases, an arduous process for large datasets. Results: We present AutoFACT, a fully automated and customizable annotation tool that assigns biologically informative functions to a sequence. Key features of this tool are that it (1) analyzes nucleotide and protein sequence data; (2) determines the most informative functional description by combining multiple BLAST reports from several user-selected databases; (3) assigns putative metabolic pathways, functional classes, enzyme classes, Gene Ontology terms and locus names; and (4) generates output in HTML, text and GFF formats for the user's convenience. We have compared AutoFACT to four well-established annotation pipelines. The error rate of functional annotation is estimated to be only 1-2%. Comparison of AutoFACT to the traditional top-BLAST-hit annotation method shows that our procedure increases the number of functionally informative annotations by approximately 50%. Conclusion: AutoFACT will serve as a useful annotation tool for smaller sequencing groups lacking dedicated bioinformatics staff. It is implemented in Perl and runs on LINUX/UNIX platforms. AutoFACT is available at . PMID:15960857

  7. A travel in the Echeveria genus wettability's world

    NASA Astrophysics Data System (ADS)

    Godeau, Guilhem; Laugier, Jean-Pierre; Orange, François; Godeau, René-Paul; Guittard, Frédéric; Darmanin, Thierry

    2017-07-01

    Nature is a constant source of inspiration for researchers and engineers. In this work, we study the wettability of various species from the genus Echeveria. All species studied present very strong hydrophobic properties with various degrees of water adhesion. Echeveria 'Perle von Nürnberg' has properties very close to superhydrophobicity, with low water adhesion (sliding angle α = 15° and contact angle hysteresis H = 9°), while Echeveria pallida and Echeveria runyonii are completely sticky (parahydrophobic), and water droplets do not move even if the surface is inclined to 90°. This work shows that most of the differences in the hydrophobic properties depend on the amount of wax crystallization. However, Echeveria pulvinata shows special wettability behavior. Its leaves possess long hairs. When a water droplet is placed on the surface, it is completely sticky; yet when the size of the droplets becomes critical, the water droplets spread across the leaf surface, displaying superhydrophilic properties. Further investigation reveals that the hairs are highly hydrophobic and rough due to the presence of wax crystals, while the bottom of the surface is smooth and hydrophilic. Such materials are excellent candidates for water harvesting systems and oil/water separation membranes.

  8. ARTS: automated randomization of multiple traits for study design.

    PubMed

    Maienschein-Cline, Mark; Lei, Zhengdeng; Gardeux, Vincent; Abbasi, Taimur; Machado, Roberto F; Gordeuk, Victor; Desai, Ankit A; Saraf, Santosh; Bahroos, Neil; Lussier, Yves

    2014-06-01

    Collecting data from large studies on high-throughput platforms, such as microarray or next-generation sequencing, typically requires processing samples in batches. There are often systematic but unpredictable biases from batch-to-batch, so proper randomization of biologically relevant traits across batches is crucial for distinguishing true biological differences from experimental artifacts. When a large number of traits are biologically relevant, as is common for clinical studies of patients with varying sex, age, genotype and medical background, proper randomization can be extremely difficult to prepare by hand, especially because traits may affect biological inferences, such as differential expression, in a combinatorial manner. Here we present ARTS (automated randomization of multiple traits for study design), which aids researchers in study design by automatically optimizing batch assignment for any number of samples, any number of traits and any batch size. ARTS is implemented in Perl and is available at github.com/mmaiensc/ARTS. ARTS is also available in the Galaxy Tool Shed, and can be used at the Galaxy installation hosted by the UIC Center for Research Informatics (CRI) at galaxy.cri.uic.edu.
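
    For intuition, the naive alternative to ARTS is a plain random shuffle into batches followed by a check of trait balance; a toy Perl sketch (sample data invented) is below. ARTS improves on this by optimizing the assignment across any number of traits simultaneously rather than trusting a single shuffle:

      #!/usr/bin/perl
      # Toy version of the problem ARTS solves: shuffle samples into
      # batches, then tally one trait per batch to inspect balance.
      use strict;
      use warnings;
      use List::Util qw(shuffle);

      my @samples = map { { id => "S$_", sex => ($_ % 2 ? 'M' : 'F') } } 1 .. 12;
      my $batch_size = 4;

      my %tally;
      my @shuffled = shuffle @samples;
      for my $i (0 .. $#shuffled) {
          my $batch = int($i / $batch_size) + 1;
          $tally{$batch}{ $shuffled[$i]{sex} }++;
      }
      for my $b (sort keys %tally) {
          printf "batch %d: %d M / %d F\n",
                 $b, $tally{$b}{M} // 0, $tally{$b}{F} // 0;
      }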

  9. Maneuver Automation Software

    NASA Technical Reports Server (NTRS)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  10. Generation of a foveomacular transcriptome

    PubMed Central

    Bernstein, Steven; Wong, Paul W.

    2014-01-01

    Purpose: Organizing molecular biologic data is a growing challenge, since the rate of data accumulation is steadily increasing. Information relevant to a particular biologic query can be difficult to extract from the comprehensive databases currently available. We present a data collection and organization model designed to ameliorate these problems and applied it to generate an expressed sequence tag (EST)-based foveomacular transcriptome. Methods: Using Perl, MySQL, EST libraries and screening, with human foveomacular gene expression as a model system, we generated a foveomacular transcriptome database enriched for molecularly relevant data. Results: Using the foveomacula as a model tissue for gene expression, we identified and organized 6,056 genes expressed in that tissue. Of those identified genes, 3,480 had not been previously described as expressed in the foveomacula. Internal experimental controls, as well as comparison of our data set to published data sets, suggest we do not yet have a complete description of the foveomacular transcriptome. Conclusions: We present an organizational method designed to amplify the utility of data pertinent to a specific research interest. Our method is generic enough to be applicable to a variety of conditions yet focused enough to allow for specialized study. PMID:24991187

  11. MeRIP-PF: An Easy-to-use Pipeline for High-resolution Peak-finding in MeRIP-Seq Data

    PubMed Central

    Li, Yuli; Song, Shuhui; Li, Cuiping; Yu, Jun

    2013-01-01

    RNA modifications, especially methylation of the N6 position of adenosine (A), m6A, represent an emerging research frontier in RNA biology. With the rapid development of high-throughput sequencing technology, in-depth study of m6A distribution and functional relevance becomes feasible. However, a robust method to effectively identify m6A-modified regions has not been available. Here, we present a novel high-efficiency and user-friendly analysis pipeline called MeRIP-PF for the identification of signal in MeRIP-Seq data in reference to controls. MeRIP-PF provides a statistical P-value for each identified m6A region, based on the difference in read distribution relative to the controls, and also calculates the false discovery rate (FDR) as a cutoff to differentiate reliable m6A regions from the background. Furthermore, MeRIP-PF achieves gene annotation of m6A signals or peaks and produces outputs in both XLS and graphical formats, which are useful for further study. MeRIP-PF is implemented in Perl and is freely available at http://software.big.ac.cn/MeRIP-PF.html. PMID:23434047
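
    The pipeline reports a P-value per region and applies an FDR cutoff. One standard way to get from sorted P-values to FDR q-values is the Benjamini-Hochberg step-up procedure, sketched below in Perl; whether MeRIP-PF uses exactly this estimator is not stated in the abstract:

      #!/usr/bin/perl
      # Benjamini-Hochberg sketch: convert sorted P-values to q-values
      # by taking p*n/rank and enforcing monotonicity from the top end.
      use strict;
      use warnings;

      my @p = sort { $a <=> $b } (0.001, 0.008, 0.039, 0.041, 0.20, 0.74);
      my $n = @p;
      my (@q, $running_min);
      $running_min = 1;
      for my $i (reverse 0 .. $n - 1) {
          my $val = $p[$i] * $n / ($i + 1);
          $running_min = $val if $val < $running_min;
          $q[$i] = $running_min;
      }
      printf "p=%.3f  q=%.3f\n", $p[$_], $q[$_] for 0 .. $n - 1;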

  12. Ceruloplasmin and hephaestin jointly protect the exocrine pancreas against oxidative damage by facilitating iron efflux.

    PubMed

    Chen, Min; Zheng, Jiashuo; Liu, Guohao; Xu, En; Wang, Junzhuo; Fuqua, Brie K; Vulpe, Chris D; Anderson, Gregory J; Chen, Huijun

    2018-05-31

    Little is known about the iron efflux from the pancreas, but it is likely that multicopper ferroxidases (MCFs) are involved in this process. We thus used hephaestin (Heph) and ceruloplasmin (Cp) single-knockout mice and Heph/Cp double-knockout mice to investigate the roles of MCFs in pancreatic iron homeostasis. We found that both HEPH and CP were expressed in the mouse pancreas, and that ablation of either MCF had limited effect on the pancreatic iron levels. However, ablation of both MCFs together led to extensive pancreatic iron deposition and severe oxidative damage. Perls' Prussian blue staining revealed that this iron deposition was predominantly in the exocrine pancreas, while the islets were spared. Consistent with these results, plasma lipase and trypsin were elevated in Heph/Cp knockout mice, indicating damage to the exocrine pancreas, while insulin secretion was not affected. These data indicate that HEPH and CP play mutually compensatory roles in facilitating iron efflux from the exocrine pancreas, and show that MCFs are able to protect the pancreas against iron-induced oxidative damage.

  13. Master Metadata Repository and Metadata-Management System

    NASA Technical Reports Server (NTRS)

    Armstrong, Edward; Reed, Nate; Zhang, Wen

    2007-01-01

    A master metadata repository (MMR) software system manages the storage and searching of metadata pertaining to data from national and international satellite sources of the Global Ocean Data Assimilation Experiment (GODAE) High Resolution Sea Surface Temperature Pilot Project (GHRSST-PP). These sources produce a total of hundreds of data files daily, each file classified as one of more than ten data products representing global sea-surface temperatures. The MMR is a relational database wherein the metadata are divided into granule-level records [denoted file records (FRs)] for individual satellite files and collection-level records [denoted data set descriptions (DSDs)] that describe metadata common to all the files from a specific data product. FRs and DSDs adhere to the NASA Directory Interchange Format (DIF). The FRs and DSDs are contained in separate subdatabases linked by a common field. The MMR is configured in MySQL database software with custom Practical Extraction and Reporting Language (PERL) programs to validate and ingest the metadata records. The database contents are converted into the Federal Geographic Data Committee (FGDC) standard format by use of the Extensible Markup Language (XML). A Web interface enables users to search for availability of data from all sources.

  14. Managing an archive of weather satellite images

    NASA Technical Reports Server (NTRS)

    Seaman, R. L.

    1992-01-01

    The author's experiences of building and maintaining an archive of hourly weather satellite pictures at NOAO are described. This archive has proven very popular with visiting and staff astronomers, especially on windy days and cloudy nights. Given access to a source of such pictures, a suite of simple shell and IRAF CL scripts can provide a great deal of robust functionality with little effort. These pictures and associated data products, such as surface analysis (radar) maps and National Weather Service forecasts, are updated hourly at anonymous ftp sites on the Internet, although your local Atmospheric Sciences Department may prove to be a more reliable source. The raw image formats are unfamiliar to most astronomers, but reading them into IRAF is straightforward. Techniques for performing this format conversion at the host computer level are described which may prove useful for other chores. Pointers are given to sources of data and of software, including a package of example tools. These tools include shell and Perl scripts for downloading pictures, maps, and forecasts, as well as IRAF scripts and host-level programs for translating the images into IRAF and GIF formats and for slicing and dicing the resulting images. Hints for displaying the images and for making hardcopies are given.
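
    For the download step, the classic Perl idiom for fetching a file from an anonymous ftp site is Net::FTP; the host and file names below are placeholders, not the actual archive's sources:

      #!/usr/bin/perl
      # Fetch one hourly image from an anonymous ftp site with Net::FTP.
      # Host and paths are placeholders.
      use strict;
      use warnings;
      use Net::FTP;

      my $ftp = Net::FTP->new('ftp.example.edu', Timeout => 60)
          or die "connect: $@";
      $ftp->login('anonymous', 'observer@example.org') or die $ftp->message;
      $ftp->cwd('/pub/weather/goes')                   or die $ftp->message;
      $ftp->binary;                                    # image data, not text
      $ftp->get('latest_ir.gif')                       or die $ftp->message;
      $ftp->quit;
      print "retrieved latest_ir.gif\n";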

  15. Software for Better Documentation of Other Software

    NASA Technical Reports Server (NTRS)

    Pinedo, John

    2003-01-01

    The Literate Programming Extraction Engine is a Practical Extraction and Reporting Language- (PERL-)based computer program that facilitates and simplifies the implementation of a concept of self-documented literate programming in a fashion tailored to the typical needs of scientists. The advantage for the programmer is that documentation and source code are written side-by-side in the same file, reducing the likelihood that the documentation will be inconsistent with the code and improving the verification that the code performs its intended functions. The advantage for the user is the knowledge that the documentation matches the software because they come from the same file. This program unifies the documentation process for a variety of programming languages, including C, C++, and several versions of FORTRAN. This program can process the documentation in any markup language, and incorporates the LaTeX typesetting software. The program includes sample Makefile scripts for automating both the code-compilation (when appropriate) and documentation-generation processes into a single command-line statement. Also included are macro instructions for the Emacs display-editor software, making it easy for a programmer to toggle between editing in a code or a documentation mode.
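
    The core mechanic, separating side-by-side documentation and code into their respective output files, can be sketched in a few lines of Perl; the %doc/%code markers and file names here are invented, and the real tool's markup conventions differ:

      #!/usr/bin/perl
      # Toy literate-programming split: route marked documentation lines
      # to one output file and code lines to another. Markers invented.
      use strict;
      use warnings;

      open my $in,   '<', 'solver.w'   or die "open: $!";
      open my $doc,  '>', 'solver.tex' or die "open: $!";
      open my $code, '>', 'solver.f90' or die "open: $!";

      my $mode = 'doc';
      while (<$in>) {
          if    (/^%doc\b/)  { $mode = 'doc';  next }
          elsif (/^%code\b/) { $mode = 'code'; next }
          print { $mode eq 'doc' ? $doc : $code } $_;
      }
      close $_ for $in, $doc, $code;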

  16. e-Stars Template Builder

    NASA Technical Reports Server (NTRS)

    Cox, Brian

    2003-01-01

    e-Stars Template Builder is a computer program that implements a concept of enabling users to rapidly gain access to information on projects of NASA's Jet Propulsion Laboratory. The information about a given project is not stored in a database but, rather, in a network that follows the project as it develops. e-Stars Template Builder resides on a server computer and uses Practical Extraction and Reporting Language (PERL) scripts to create what are called "e-STARS node templates," software constructs that allow for project-specific configurations. A user's computer need not be equipped with special software other than an Internet browser: e-Stars Template Builder is compatible with Windows, Macintosh, and UNIX operating systems, and the user invokes it from a browser window. Operations that can be performed by the user include the creation of child processes and the addition of links and descriptions of documentation to existing pages or nodes. Through this addition of "child processes" of nodes, a network that reflects the development of a project is generated.

  17. CAGEd-oPOSSUM: motif enrichment analysis from CAGE-derived TSSs.

    PubMed

    Arenillas, David J; Forrest, Alistair R R; Kawaji, Hideya; Lassmann, Timo; Wasserman, Wyeth W; Mathelier, Anthony

    2016-09-15

    With the emergence of large-scale Cap Analysis of Gene Expression (CAGE) datasets from individual labs and the FANTOM consortium, one can now analyze the cis-regulatory regions associated with gene transcription at an unprecedented level of refinement. By coupling transcription factor binding site (TFBS) enrichment analysis with CAGE-derived genomic regions, CAGEd-oPOSSUM can identify TFs that act as key regulators of genes involved in specific mammalian cell and tissue types. The webtool allows for the analysis of CAGE-derived transcription start sites (TSSs) either provided by the user or selected from ∼1300 mammalian samples from the FANTOM5 project with pre-computed TFBS predicted with JASPAR TF binding profiles. The tool helps power insights into the regulation of genes through the study of the specific usage of TSSs within specific cell types and/or under specific conditions. The CAGEd-oPOSSUM web tool is implemented in Perl, MySQL and Apache and is available at http://cagedop.cmmt.ubc.ca/CAGEd_oPOSSUM. CONTACTS: anthony.mathelier@ncmm.uio.no or wyeth@cmmt.ubc.ca. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  18. STOPGAP: a database for systematic target opportunity assessment by genetic association predictions.

    PubMed

    Shen, Judong; Song, Kijoung; Slater, Andrew J; Ferrero, Enrico; Nelson, Matthew R

    2017-09-01

    We developed the STOPGAP (Systematic Target OPportunity assessment by Genetic Association Predictions) database, an extensive catalog of human genetic associations mapped to effector gene candidates. STOPGAP draws on a variety of publicly available GWAS associations, linkage disequilibrium (LD) measures, functional genomic and variant annotation sources. Algorithms were developed to merge the association data, partition associations into non-overlapping LD clusters, map variants to genes and produce a variant-to-gene score used to rank the relative confidence among potential effector genes. This database can be used for a multitude of investigations into the genes and genetic mechanisms underlying inter-individual variation in human traits, as well as supporting drug discovery applications. Shell, R, Perl and Python scripts and STOPGAP R data files (version 2.5.1 at publication) are available at https://github.com/StatGenPRD/STOPGAP . Some of the most useful STOPGAP fields can be queried through an R Shiny web application at http://stopgapwebapp.com . CONTACT: matthew.r.nelson@gsk.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  19. [Psychological theory and implicit sociology.].

    PubMed

    Sévigny, R

    1983-01-01

    This text is based on the hypothesis that every theory of the psychology of personality must inevitably, in one manner or another, have a sociological referent; that is to say, it must refer to a body of knowledge which deals with a diversity of social contexts and their relations to individuals. According to this working hypothesis, such a sociology is implicit. The text then discusses a group of theoretical approaches in an effort to verify this hypothesis, an approach that allows the diverse forms and expressions of this implicit sociology to be drawn out. Within this context, several currents are briefly explored: psychoanalysis, behaviorism, gestalt, and the classical theory of needs. The author also comments on the approach, inspired by oriental techniques or philosophies, which employs the notion of myth to deepen self-awareness. Finally, from the same perspective, he comments at greater length on the work of Carl Rogers, highlighting the diverse forms of implicit sociology. In addition to Carl Rogers, this text refers to Freud, Jung, Adler, Reich, Perls, Goodman, and Skinner, as well as to Ginette Paris and various analysts of Taoism. In conclusion, the author indicates the significance of his analysis from the double viewpoint of psychological theory and practice.

  20. DAVID-WS: a stateful web service to facilitate gene/protein list analysis.

    PubMed

    Jiao, Xiaoli; Sherman, Brad T; Huang, Da Wei; Stephens, Robert; Baseler, Michael W; Lane, H Clifford; Lempicki, Richard A

    2012-07-01

    The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html.

  1. Protein classification using probabilistic chain graphs and the Gene Ontology structure.

    PubMed

    Carroll, Steven; Pavlovic, Vladimir

    2006-08-01

    Probabilistic graphical models have been developed in the past for the task of protein classification. In many cases, classifications obtained from the Gene Ontology have been used to validate these models. In this work we directly incorporate the structure of the Gene Ontology into the graphical representation for protein classification. We present a method in which each protein is represented by a replicate of the Gene Ontology structure, effectively modeling each protein in its own 'annotation space'. Proteins are also connected to one another according to different measures of functional similarity, after which belief propagation is run to make predictions at all ontology terms. The proposed method was evaluated on a set of 4879 proteins from the Saccharomyces Genome Database whose interactions were also recorded in the GRID project. Results indicate that direct utilization of the Gene Ontology improves predictive ability, outperforming traditional models that do not take advantage of dependencies among functional terms. An average increase in the accuracy (precision) of positive and negative term predictions of 27.8% (2.0%) was observed over three different similarity measures and three subontologies. A C/C++/Perl implementation is available from the authors upon request.

  2. DyNAVacS: an integrative tool for optimized DNA vaccine design.

    PubMed

    Harish, Nagarajan; Gupta, Rekha; Agarwal, Parul; Scaria, Vinod; Pillai, Beena

    2006-07-01

    DNA vaccines have slowly emerged as keystones in preventive immunology due to their versatility in inducing both cell-mediated and humoral immune responses. The design of an efficient DNA vaccine involves choice of a suitable expression vector, ensuring optimal expression by codon optimization, engineering CpG motifs for enhancing immune responses, and providing additional sequence signals for efficient translation. DyNAVacS is a web-based tool created for rapid and easy design of DNA vaccines. It follows a step-wise design flow that guides the user through the sequential steps of vaccine design. Further, it allows restriction enzyme mapping, design of primers spanning user-specified sequences, and provides information regarding the vectors currently used for generation of DNA vaccines. The web version uses the Apache HTTP server. The interface was written in HTML and utilizes Common Gateway Interface (CGI) scripts written in PERL for functionality. DyNAVacS is an integrated tool consisting of user-friendly programs, which require minimal information from the user. The software is available free of cost, as a web-based application at URL: http://miracle.igib.res.in/dynavac/.
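
    The step-wise web flow described here is classic Perl CGI. A minimal, generic sketch of that pattern (not DyNAVacS's actual code; the form field name and the GC-content computation are invented for illustration):

        #!/usr/bin/perl
        # Minimal CGI handler of the kind that backs such sequence-design tools.
        use strict;
        use warnings;
        use CGI;

        my $q   = CGI->new;
        my $seq = $q->param('sequence') // '';   # hypothetical form field
        $seq =~ s/[^ACGTacgt]//g;                # keep only nucleotide letters

        print $q->header('text/html');
        printf "<html><body><p>Received %d bases; GC content %.1f%%</p></body></html>\n",
               length($seq),
               length($seq) ? 100 * (($seq =~ tr/GCgc//) / length($seq)) : 0;

    Each step of a multi-step design flow is then just another script (or another branch of one script) that reads the previous step's parameters and emits the next form.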

  3. ExpTreeDB: web-based query and visualization of manually annotated gene expression profiling experiments of human and mouse from GEO.

    PubMed

    Ni, Ming; Ye, Fuqiang; Zhu, Juanjuan; Li, Zongwei; Yang, Shuai; Yang, Bite; Han, Lu; Wu, Yongge; Chen, Ying; Li, Fei; Wang, Shengqi; Bo, Xiaochen

    2014-12-01

    Numerous public microarray datasets are valuable resources for the scientific communities. Several online tools have made great strides in using these data by querying related datasets with users' own gene signatures or expression profiles. However, dataset annotation and result exhibition still need to be improved. ExpTreeDB is a database that allows for queries on human and mouse microarray experiments from Gene Expression Omnibus with gene signatures or profiles. Compared with similar applications, ExpTreeDB pays more attention to dataset annotation and result visualization. We introduced a multiple-level annotation system to depict and organize the original experiments. For example, a tamoxifen-treated cell line experiment is hierarchically annotated as 'agent→drug→estrogen receptor antagonist→tamoxifen'. Consequently, retrieved results are exhibited as interactive tree-structured graphics, which provide an overview of related experiments and might enlighten users on key items of interest. The database is freely available at http://biotech.bmi.ac.cn/ExpTreeDB. The Web site is implemented in Perl, PHP, R, MySQL and Apache. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. CsSNP: A Web-Based Tool for the Detecting of Comparative Segments SNPs.

    PubMed

    Wang, Yi; Wang, Shuangshuang; Zhou, Dongjie; Yang, Shuai; Xu, Yongchao; Yang, Chao; Yang, Long

    2016-07-01

    SNP (single nucleotide polymorphism) analysis is a popular tool for the study of genetic diversity, evolution, and other areas. It is therefore desirable to have a convenient, robust, rapid, and open-source SNP-detection tool available to all researchers. Because the detection of SNPs requires special software and a series of steps including alignment, detection, analysis, and presentation, the study of SNPs is difficult for nonprofessional users. CsSNP (Comparative segments SNP, http://biodb.sdau.edu.cn/cssnp/ ) is a freely available web tool based on the Blat, Blast, and Perl programs that detects comparative-segment SNPs and shows detailed information about them. The results are filtered and presented in statistical figures and a Gbrowse map. The platform contains the reference genomic sequences and coding sequences of 60 plant species, and provides new opportunities for users to detect SNPs easily. CsSNP gives nonprofessional users a convenient way to find comparative-segment SNPs in their own sequences, provides information about and analysis of the SNPs, and displays these data in a dynamic map. It offers a new method to detect SNPs and may accelerate related studies.

  5. Poly(A)-tag deep sequencing data processing to extract poly(A) sites.

    PubMed

    Wu, Xiaohui; Ji, Guoli; Li, Qingshun Quinn

    2015-01-01

    Polyadenylation [poly(A)] is an essential posttranscriptional processing step in the maturation of eukaryotic mRNA. The advent of next-generation sequencing (NGS) technology has offered feasible means to generate large-scale data and new opportunities for intensive study of polyadenylation, particularly deep sequencing of the transcriptome targeting the junction of the 3'-UTR and the poly(A) tail of the transcript. To take advantage of this unprecedented amount of data, we present an automated workflow to identify polyadenylation sites by integrating NGS data cleaning, processing, mapping, normalizing, and clustering. In this pipeline, a series of Perl scripts are seamlessly integrated to iteratively map the single- or paired-end sequences to the reference genome. After mapping, the poly(A) tags (PATs) at the same genome coordinate are grouped into one cleavage site, and internal priming artifacts are removed. An ambiguous region is then introduced when parsing the genome annotation for cleavage-site clustering. Finally, cleavage sites within a close range of 24 nucleotides, including sites from different samples, are clustered into poly(A) clusters. This procedure can identify thousands of reliable poly(A) clusters from millions of NGS sequences in different tissues or treatments.
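
    The final clustering step lends itself to a compact sketch. Assuming a sorted list of cleavage-site coordinates on one chromosome (the coordinates below are invented, and this is an illustration of the idea rather than the pipeline's actual code), sites within 24 nucleotides of each other merge into one poly(A) cluster:

        #!/usr/bin/perl
        # Cluster sorted cleavage-site coordinates lying within 24 nt -- a sketch.
        use strict;
        use warnings;

        my $RANGE = 24;                               # clustering distance in nucleotides
        my @sites = (1045, 1050, 1061, 2200, 2210);   # sorted example coordinates
        my @clusters;

        for my $pos (@sites) {
            if (@clusters and $pos - $clusters[-1][-1] <= $RANGE) {
                push @{ $clusters[-1] }, $pos;        # extend the current cluster
            } else {
                push @clusters, [$pos];               # start a new cluster
            }
        }
        printf "cluster: %d-%d (%d sites)\n", $_->[0], $_->[-1], scalar @$_
            for @clusters;

    On the example data this prints two clusters, 1045-1061 (3 sites) and 2200-2210 (2 sites); a single linear pass suffices because the input is sorted.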

  6. Software development for a gamma-ray burst rapid-response observatory in the US Virgin Islands.

    NASA Astrophysics Data System (ADS)

    Davis, K. A.; Giblin, T. W.; Neff, J. E.; Hakkila, J.; Hartmann, D.

    2004-12-01

    The site is situated near the crest of Crown Mountain on the island of St. Thomas in the US Virgin Islands. The observing site is strategically located at 65° W longitude, making it the easternmost GRB-dedicated observing site in the western hemisphere. The observatory has a 0.5 m robotic telescope and a Marconi 4240 2048 by 2048 CCD with BVRI filters. The field of view is identical to that of the XRT onboard Swift, 19 by 19 arc minutes. The telescope is operated through the Talon telescope control software. The observatory is notified of a burst trigger through the GRB Coordinates Network (GCN). This GCN notification is received through a socket connection to the control computer on site. A Perl script passes this information to the Talon software, which automatically interrupts concurrent observations and inserts a new GRB observing schedule. Once the observations are made, the resulting images are analyzed in IRAF. A source extraction is necessary to identify known sources and the optical transient. The system is being calibrated for automatic GRB response and is expected to be ready to follow up Swift observations. This work has been supported by NSF and NASA-EPSCoR.
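
    The trigger path (GCN socket, Perl relay, Talon schedule) can be sketched as follows. This is a hedged illustration only: the port number, the one-line "RA Dec" payload, and the hand-off file are placeholders, since the real GCN socket protocol delivers fixed-format binary packets.

        #!/usr/bin/perl
        # Sketch of a socket listener that hands burst coordinates to the scheduler.
        use strict;
        use warnings;
        use IO::Socket::INET;

        my $sock = IO::Socket::INET->new(
            LocalPort => 5000,       # hypothetical trigger feed port
            Listen    => 1,
            Reuse     => 1,
        ) or die "listen failed: $!";

        while (my $client = $sock->accept) {
            my $line = <$client>;                    # assume one text line per trigger
            next unless defined $line;
            my ($ra, $dec) = split /\s+/, $line;     # hypothetical "RA Dec" payload
            open my $fh, '>', '/tmp/grb_target.txt' or die $!;
            print $fh "$ra $dec\n";                  # file picked up by the scheduler
            close $fh;
        }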

  7. Development of a Computing Cluster At the University of Richmond

    NASA Astrophysics Data System (ADS)

    Carbonneau, J.; Gilfoyle, G. P.; Bunn, E. F.

    2010-11-01

    The University of Richmond has developed a computing cluster to support the massive simulation and data-analysis requirements of programs in intermediate-energy nuclear physics and cosmology. It is a 20-node, 240-core system running Red Hat Enterprise Linux 5. We have built and installed the physics software packages (Geant4, gemc, MADmap...) and developed shell and Perl scripts for running those programs on the remote nodes. The system has a theoretical processing peak of about 2500 GFLOPS. Testing with the High Performance Linpack (HPL) benchmarking program (one of the standard benchmarks used by the TOP500 list of fastest supercomputers) resulted in speeds of over 900 GFLOPS. The difference between the theoretical and measured speeds is due to limitations in the communication speed among the nodes, creating a bottleneck for large-memory problems: as HPL sends data between nodes, the gigabit Ethernet connection cannot keep up with the processing power. We will show how both the theoretical and actual performance of the cluster compares with other current and past clusters, as well as the cost per GFLOP. We will also examine the scaling of the performance when distributed to increasing numbers of nodes.

  8. Telemetry-Enhancing Scripts

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.

    2009-01-01

    Scripts Providing a Cool Kit of Telemetry Enhancing Tools (SPACKLE) is a set of software tools that fill gaps in the capabilities of other software used in processing downlinked data in the Mars Exploration Rovers (MER) flight and test-bed operations. SPACKLE tools have helped to accelerate the automatic processing and interpretation of MER mission data, enabling non-experts to understand and/or use MER query and data product command simulation software tools more effectively. SPACKLE has greatly accelerated some operations and provides new capabilities. The tools of SPACKLE are written, variously, in Perl or the C or C++ language. They perform a variety of search and shortcut functions that include the following: Generating text-only, Event-Report-annotated, and Web-enhanced views of command sequences; Labeling integer enumerations with their symbolic meanings in text messages and engineering channels; Systematically detecting corruption within data products; Generating text-only displays of data-product catalogs, including downlink status; Validating and labeling commands related to data products; Performing convenient searches of detailed engineering data spanning multiple Martian solar days; Generating tables of initial conditions pertaining to engineering, health, and accountability data; Simplifying the construction and simulation of command sequences; and Performing fast time-format conversions and sorting.
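
    The simplest of these functions, labeling integer enumerations with their symbolic meanings, reduces to a lookup table plus a substitution in Perl. A sketch (the channel name and enumeration values are invented, not actual MER telemetry):

        #!/usr/bin/perl
        # Replace bare enumeration integers in telemetry lines with symbolic labels.
        use strict;
        use warnings;

        my %motor_state = (0 => 'IDLE', 1 => 'DRIVING', 2 => 'FAULT');  # hypothetical

        while (my $line = <DATA>) {
            # Annotate "MOTOR_STATE=<n>" tokens with their symbolic meaning.
            $line =~ s/MOTOR_STATE=(\d+)/"MOTOR_STATE=$1(" . ($motor_state{$1} // '?') . ")"/ge;
            print $line;
        }
        __DATA__
        SOL 43 10:02:11 MOTOR_STATE=1 CURRENT=0.42
        SOL 43 10:02:12 MOTOR_STATE=2 CURRENT=1.90

    The output reads "MOTOR_STATE=1(DRIVING)" and "MOTOR_STATE=2(FAULT)", which is exactly the kind of annotation that makes raw channel dumps legible to non-experts.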

  9. IMGT/3Dstructure-DB and IMGT/StructuralQuery, a database and a tool for immunoglobulin, T cell receptor and MHC structural data

    PubMed Central

    Kaas, Quentin; Ruiz, Manuel; Lefranc, Marie-Paule

    2004-01-01

    IMGT/3Dstructure-DB and IMGT/StructuralQuery are a novel 3D structure database and a new tool for immunological proteins. They are part of IMGT, the international ImMunoGenetics information system®, a high-quality integrated knowledge resource specializing in immunoglobulins (IG), T cell receptors (TR), major histocompatibility complex (MHC) and related proteins of the immune system (RPI) of human and other vertebrate species, which consists of databases, Web resources and interactive on-line tools. IMGT/3Dstructure-DB data are described according to the IMGT Scientific chart rules based on the IMGT-ONTOLOGY concepts. IMGT/3Dstructure-DB provides IMGT gene and allele identification of IG, TR and MHC proteins with known 3D structures, domain delimitations, amino acid positions according to the IMGT unique numbering, and renumbered coordinate flat files. Moreover, IMGT/3Dstructure-DB provides 2D graphical representations (or Collier de Perles) and results of contact analysis. The IMGT/StructuralQuery tool allows this database to be searched by specific structural characteristics. IMGT/3Dstructure-DB and IMGT/StructuralQuery are freely available at http://imgt.cines.fr. PMID:14681396

  10. The CEBAF Element Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from the original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.

  11. Hydroxyurea could be a good clinically relevant iron chelator.

    PubMed

    Italia, Khushnooma; Colah, Roshan; Ghosh, Kanjaksha

    2013-01-01

    Our previous study showed a reduction in the serum ferritin of β-thalassemia patients on hydroxyurea therapy. Here we aimed to evaluate the efficacy of hydroxyurea alone and in combination with the most widely used iron chelators, deferiprone and deferasirox, for reducing iron in experimentally iron-overloaded mice. 70 BALB/c mice received intraperitoneal injections of iron-sucrose. The mice were then divided into 8 groups and were orally given hydroxyurea, deferiprone or deferasirox alone and in combinations for 4 months. CBC, serum ferritin, TBARS, sTfR and hepcidin were evaluated before and after iron overload and subsequently after 4 months of drug therapy. All animals were then killed. Iron staining of the heart and liver tissue was done using Perls' Prussian blue stain. The dry weight of iron in the heart and liver was determined by atomic absorption spectrometry. The elevated serum ferritin, TBARS, hepcidin and dry weights of iron in the liver and heart showed a significant reduction in the groups treated with iron chelators, with the maximum reduction in the group treated with a combination of deferiprone, deferasirox and hydroxyurea. Thus hydroxyurea proves its role in reducing iron in iron-overloaded mice. The iron-chelating effect of these drugs can also be increased if they are given in combination.

  12. Silicon solar cells: Past, present and the future

    NASA Astrophysics Data System (ADS)

    Lee, Youn-Jung; Kim, Byung-Sung; Ifitiquar, S. M.; Park, Cheolmin; Yi, Junsin

    2014-08-01

    There has been a great demand for renewable energy over the last few years. However, the solar cell industry is currently experiencing a temporary plateau due to a sluggish economy and an oversupply of low-quality cells. The current situation can be overcome by reducing the production cost and by improving the cell's conversion efficiency. New materials such as compound semiconductor thin films have been explored to reduce the fabrication cost, and structural changes have been explored to improve the cell's efficiency. Although a record efficiency of 24.7% is held by a PERL-structured silicon solar cell and 13.44% has been realized using a thin silicon film, the mass production of these cells is still too expensive. Crystalline and amorphous silicon-based solar cells have led the solar industry and have occupied more than half of the market so far, and they will continue to play a pivotal role in the future photovoltaic (PV) market. In this paper, we discuss two primary approaches that may boost the silicon-based solar cell market: one is a high-efficiency approach and the other is a low-cost approach. We also discuss the future prospects of various solar cells.

  13. CAGEd-oPOSSUM: motif enrichment analysis from CAGE-derived TSSs

    PubMed Central

    Arenillas, David J.; Forrest, Alistair R. R.; Kawaji, Hideya; Lassmann, Timo; Wasserman, Wyeth W.; Mathelier, Anthony

    2016-01-01

    With the emergence of large-scale Cap Analysis of Gene Expression (CAGE) datasets from individual labs and the FANTOM consortium, one can now analyze the cis-regulatory regions associated with gene transcription at an unprecedented level of refinement. By coupling transcription factor binding site (TFBS) enrichment analysis with CAGE-derived genomic regions, CAGEd-oPOSSUM can identify TFs that act as key regulators of genes involved in specific mammalian cell and tissue types. The webtool allows for the analysis of CAGE-derived transcription start sites (TSSs) either provided by the user or selected from ∼1300 mammalian samples from the FANTOM5 project with pre-computed TFBS predicted with JASPAR TF binding profiles. The tool helps power insights into the regulation of genes through the study of the specific usage of TSSs within specific cell types and/or under specific conditions. Availability and Implementation: The CAGEd-oPOSSUM web tool is implemented in Perl, MySQL and Apache and is available at http://cagedop.cmmt.ubc.ca/CAGEd_oPOSSUM. Contacts: anthony.mathelier@ncmm.uio.no or wyeth@cmmt.ubc.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27334471

  14. ICT: isotope correction toolbox.

    PubMed

    Jungreuthmayer, Christian; Neubauer, Stefan; Mairinger, Teresa; Zanghellini, Jürgen; Hann, Stephan

    2016-01-01

    Isotope tracer experiments are an invaluable technique to analyze and study the metabolism of biological systems. However, isotope labeling experiments are often affected by naturally abundant isotopes, especially in cases where mass spectrometric methods make use of derivatization. The correction of these additive interferences, in particular for complex isotopic systems, is numerically challenging and still an emerging field of research. When positional information is generated via collision-induced dissociation, even more complex calculations for isotopic interference correction are necessary. So far, no freely available tools can handle tandem mass spectrometry data. We present the isotope correction toolbox, a program that corrects tandem mass isotopomer data from tandem mass spectrometry experiments. The isotope correction toolbox is written in the multi-platform programming language Perl and, therefore, can be used on all commonly available computer platforms. Source code and documentation can be freely obtained under the Artistic License or the GNU General Public License from https://github.com/jungreuc/isotope_correction_toolbox/. CONTACTS: christian.jungreuthmayer@boku.ac.at, juergen.zanghellini@boku.ac.at. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Development of Very Long Baseline Interferometry (VLBI) techniques in New Zealand: Array simulation, image synthesis and analysis

    NASA Astrophysics Data System (ADS)

    Weston, S. D.

    2008-04-01

    This thesis presents the design and development of a process to model Very Long Baseline Interferometry (VLBI) aperture synthesis antenna arrays. In line with the Auckland University of Technology (AUT) Institute for Radiophysics and Space Research (IRSR) aims to develop the knowledge, skills and experience within New Zealand, extensive use of existing radio astronomical software has been incorporated into the process, namely AIPS (Astronomical Imaging Processing System), MIRIAD (a radio interferometry data reduction package) and DIFMAP (a program for synthesis imaging of visibility data from interferometer arrays of radio telescopes). This process has been used to model various antenna array configurations for two proposed New Zealand sites for antennas in a VLBI array configuration with existing Australian facilities and a possible antenna at Scott Base in Antarctica, and the results are presented in an attempt to demonstrate the improvement to be gained by joint trans-Tasman VLBI observation. It is hoped these results and this process will assist the planning and placement of proposed New Zealand radio telescopes for cooperation with groups such as the Australian Long Baseline Array (LBA), others in the Pacific Rim and possibly globally, as well as potential future involvement of New Zealand with the SKA. The developed process has also been used to model a phased building schedule for the SKA in Australia and the addition of two antennas in New Zealand. This has been presented to the wider astronomical community via the Royal Astronomical Society of New Zealand Journal, and is summarized in this thesis with some additional material. A new measure of quality ("figure of merit") for comparing the original model image and final CLEAN images by utilizing normalized 2-D cross correlation is evaluated as an alternative to the existing subjective visual operator image comparison undertaken to date by other groups. This new unit of measure is then used in the presentation of the results to provide a quantitative comparison of the different array configurations modelled. Included in the process is the development of a new antenna array visibility program which was based on a Perl script written by Prof Steven Tingay to plot antenna visibilities for the Australian Square Kilometre Array (SKA) proposal. This has been expanded and improved, removing the hard-coded assumptions for the SKA configuration and providing a new, useful and flexible program for the wider astronomical community. A prototype user interface using html/cgi/perl was developed for the process so that the underlying software packages can be served over the web to a user via an internet browser. This was used to demonstrate how easy it is to provide a friendlier interface compared to the existing cumbersome and difficult command-line-driven interfaces (although the command line can be retained for more experienced users).
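
    For equal-sized images, the normalized cross-correlation figure of merit is the Pearson correlation over pixels. A plain-Perl sketch over flattened pixel lists (real use would operate on 2-D image arrays, e.g. via PDL; the sample vectors are illustrative):

        #!/usr/bin/perl
        # Normalized cross-correlation of two equal-length pixel vectors -- a sketch.
        use strict;
        use warnings;
        use List::Util qw(sum);

        sub ncc {
            my ($a, $b) = @_;                 # array refs of pixel values
            my $n  = @$a;
            my $ma = sum(@$a) / $n;           # mean of each image
            my $mb = sum(@$b) / $n;
            my ($num, $da, $db) = (0, 0, 0);
            for my $i (0 .. $n - 1) {
                my ($x, $y) = ($a->[$i] - $ma, $b->[$i] - $mb);
                $num += $x * $y;
                $da  += $x * $x;
                $db  += $y * $y;
            }
            return $num / sqrt($da * $db);    # 1.0 means identical structure
        }

        print ncc([1, 2, 3, 4], [2, 4, 6, 8]), "\n";   # prints 1 (perfectly correlated)

    A value near 1 indicates the CLEAN image recovered the model's structure; lower values quantify how much a given array configuration degrades the reconstruction.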

  16. Accounting Data to Web Interface Using PERL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hargeaves, C

    2001-08-13

    This document will explain the process to create a web interface for the accounting information generated by the High Performance Storage System (HPSS) accounting report feature. The accounting report contains useful data but it is not easily accessed in a meaningful way. The accounting report is the only way to see summarized storage usage information. The first step is to take the accounting data, make it meaningful and store the modified data in persistent databases. The second step is to generate the various user interfaces, HTML pages, that will be used to access the data. The third step is to transfer all required files to the web server. The web pages pass parameters to Common Gateway Interface (CGI) scripts that generate dynamic web pages and graphs. The end result is a web page with specific information presented in text with or without graphs. The accounting report has a specific format that allows the use of regular expressions to verify if a line is storage data. Each storage data line is stored in a detailed database file with a name that includes the run date. The detailed database is used to create a summarized database file that also uses the run date in its name. The summarized database is used to create the group.html web page that includes a list of all storage users. Two additional web pages are generated by scripts that query the database folder to build a list of available databases. A master script, run monthly as part of a cron job after the accounting report has completed, manages all of these individual scripts. All scripts are written in the PERL programming language. Whenever possible, data manipulation scripts are written as filters. All scripts are written to be single source, which means they will function properly on both the open and closed networks at LLNL. The master script handles the command line inputs for all scripts, file transfers to the web server and recording of run information in a log file. The rest of the scripts manipulate the accounting data or use the files created to generate HTML pages. Each script will be described in detail herein. The following is a brief description of HPSS taken directly from an HPSS web site. ''HPSS is a major development project, which began in 1993 as a Cooperative Research and Development Agreement (CRADA) between government and industry. The primary objective of HPSS is to move very large data objects between high performance computers, workstation clusters, and storage libraries at speeds many times faster than is possible with today's software systems. For example, HPSS can manage parallel data transfers from multiple network-connected disk arrays at rates greater than 1 Gbyte per second, making it possible to access high definition digitized video in real time.'' The HPSS accounting report is a canned report whose format is controlled by the HPSS developers.
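
    The filtering idea, using a regular expression to decide whether a report line is storage data, is easily sketched. The line format assumed below is hypothetical (as noted above, the real report format is controlled by the HPSS developers):

        #!/usr/bin/perl
        # Filter: keep only lines that look like storage data -- a sketch.
        # Assumed (hypothetical) line format:  <account-id>  <files>  <bytes>
        use strict;
        use warnings;

        while (my $line = <>) {
            next unless $line =~ /^\s*(\d+)\s+(\d+)\s+(\d+)\s*$/;
            my ($acct, $files, $bytes) = ($1, $2, $3);
            print join("\t", $acct, $files, $bytes), "\n";   # detailed database record
        }

    Written as a filter, the script composes naturally with the rest of the pipeline: the master script can pipe the raw report through it and redirect the output into the dated detailed-database file.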

  17. Simple, Scalable, Script-Based Science Processor (S4P)

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Vollmer, Bruce; Berrick, Stephen; Mack, Robert; Pham, Long; Zhou, Bryan; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The development and deployment of data processing systems to process Earth Observing System (EOS) data has proven to be costly and prone to technical and schedule risk. Integration of science algorithms into a robust operational system has been difficult. The core processing system, based on commercial tools, has demonstrated limitations at the rates needed to produce the several terabytes per day for EOS, primarily due to job management overhead. This has motivated an evolution in the EOS Data Information System toward a more distributed one incorporating Science Investigator-led Processing Systems (SIPS). As part of this evolution, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a simplified processing system to accommodate the increased load expected with the advent of reprocessing and the launch of a second satellite. This system, the Simple, Scalable, Script-based Science Processor (S4P), may also serve as a resource for future SIPS. The current EOSDIS Core System was designed to be general, resulting in a large, complex mix of commercial and custom software. In contrast, many simpler systems, such as the EROS Data Center AVHRR IKM system, rely on a simple directory structure to drive processing, with directories representing different stages of production. The system passes input data to a directory, and the output data is placed in a "downstream" directory. S4P is based on the latter concept, but with modifications to allow varied science algorithms and improve portability. It uses a factory assembly-line paradigm: when work orders arrive at a station, an executable is run, and output work orders are sent to downstream stations. The stations are implemented as UNIX directories, while work orders are simple ASCII files. The core S4P infrastructure consists of a Perl program called stationmaster, which detects newly arrived work orders and forks a job to run the appropriate executable (registered in a configuration file for that station). Although S4P is written in Perl, the executables associated with a station can be any program that can be run from the command line, i.e., non-interactively. An S4P instance is typically monitored using a simple Graphical User Interface. However, the reliance of S4P on UNIX files and directories also allows visibility into the state of stations and jobs using standard operating system commands, permitting remote monitor/control over low-bandwidth connections. S4P is being used as the foundation for several small- to medium-size systems for data mining, on-demand subsetting, processing of direct broadcast Moderate Resolution Imaging Spectroradiometer (MODIS) data, and Quick-Response MODIS processing. It has also been used to implement a large-scale system to process MODIS Level 1 and Level 2 Standard Products, which will ultimately process close to 2 TB/day.
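
    The stationmaster loop reduces to a small, recognizable Perl pattern. A sketch under stated assumptions (the station directory, the DO.* work-order glob, and the handler path are placeholders, not the actual S4P configuration):

        #!/usr/bin/perl
        # Sketch of a station loop: detect work orders, fork a handler per order.
        use strict;
        use warnings;

        $SIG{CHLD} = 'IGNORE';                        # auto-reap finished children

        my $station = '/tmp/station';                 # hypothetical station directory
        my $handler = '/usr/local/bin/do_work';       # hypothetical registered executable

        while (1) {
            for my $order (glob "$station/DO.*") {    # work orders are plain files
                my $claimed = "$order.RUNNING";
                rename $order, $claimed or next;      # another process may have claimed it
                my $pid = fork;
                die "fork failed: $!" unless defined $pid;
                if ($pid == 0) {
                    exec $handler, $claimed;          # child runs the station's executable
                    die "exec failed: $!";
                }
            }
            sleep 5;                                  # poll interval
        }

    Because the whole state machine lives in the filesystem, an operator can inspect or even repair a stalled pipeline with nothing more than ls and mv, which is exactly the low-bandwidth visibility the abstract describes.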

  18. [A customized method for information extraction from unstructured text data in the electronic medical records].

    PubMed

    Bao, X Y; Huang, W J; Zhang, K; Jin, M; Li, Y; Niu, C Z

    2018-04-18

    There is a huge amount of diagnostic and treatment information in electronic medical records (EMRs), a concrete manifestation of clinicians' actual diagnosis and treatment details. Many episodes in EMRs, such as complaints, present illness, past history, differential diagnosis, diagnostic imaging, and surgical records, reflect details of diagnosis and treatment in the clinical process using Chinese natural-language descriptions. How to extract effective information from these Chinese narrative text data and organize it into tabular form for medical research analysis, so that real-world clinical data can be put to practical use, is a difficult problem in Chinese medical data processing. Based on the EMR narrative text data of a tertiary hospital in China, a customized method for learning information-extraction rules, combined with rule-based information extraction, is proposed. The overall method consists of three steps. (1) A random sample of 600 records (including history of present illness, past history, personal history, family history, etc.) was extracted from the electronic medical record data as a raw corpus. Using our Chinese clinical narrative text annotation platform, trained clinicians and nurses marked the tokens and phrases in the corpus to be extracted (with history of diabetes as the example). (2) Based on the annotated clinical text corpus, extraction templates were first summarized and induced. These templates were then rewritten as extraction rules using regular expressions in the Perl programming language. Using these extraction rules as a basic knowledge base, we developed extraction packages in Perl for extracting data from the EMR text. Finally, the extracted data items were organized in tabular format for later use in clinical research or hospital surveillance. (3) As the final step, the proposed method was evaluated and validated in the National Clinical Service Data Integration Platform, and the extraction results were checked with combined manual and automated verification, demonstrating the effectiveness of the method. For all patients with diabetes as the diagnosed disease in the Department of Endocrinology of the hospital (altogether 1 436 patients discharged in 2015), extraction of the history-of-diabetes records showed a recall rate of 87.6%, an accuracy rate of 99.5%, and an F-score of 0.93. For the 10% sample of patients with diabetes (totally 1 223 patients) discharged in August 2017 from the same department, the extracted diabetes-history results showed a recall rate of 89.2%, an accuracy rate of 99.2%, and an F-score of 0.94. This study mainly adopts a combination of natural language processing and rule-based information extraction, and designs and implements an algorithm for extracting customized information from unstructured Chinese electronic medical record text data. It achieves better results than existing work.
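
    As a toy illustration of the rule-rewriting step (the rule and the sentences below are invented examples, not the paper's actual templates), a single Perl regular expression can flag a history of diabetes in Chinese free text:

        #!/usr/bin/perl
        # Sketch: rule-based extraction of a diabetes-history mention from EMR text.
        use strict;
        use warnings;
        use utf8;
        binmode STDOUT, ':encoding(UTF-8)';

        # Hypothetical rule: 糖尿病 (diabetes) followed by 病史/史 (history).
        my $rule = qr/糖尿病(病?史)/;

        my @sentences = ('患者有10年糖尿病病史', '否认高血压病史');   # toy examples
        for my $s (@sentences) {
            printf "%s => %s\n", $s, ($s =~ $rule ? 'diabetes history: YES' : 'no match');
        }

    A production rule base is, of course, far larger (covering negation such as 否认 "denies", durations, and synonyms), but each rule compiles down to exactly this kind of pattern applied per episode.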

  19. EUPAN enables pan-genome studies of a large number of eukaryotic genomes.

    PubMed

    Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun

    2017-08-01

    Pan-genome analyses are routinely carried out for bacteria to interpret the within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexities of their genomes. Here we propose EUPAN, a eukaryotic pan-genome analysis toolkit, enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C++. It runs under Linux and is best suited to a computer cluster with an LSF or SLURM job-scheduling system. EUPAN together with its standard operating procedure (SOP) is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html . ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  20. A Response to Odland et al.'s Misleading, Alarmist Estimates of Risk for Overpathologizing when Interpreting the MMPI-2-RF.

    PubMed

    Tarescavage, Anthony M; Ben-Porath, Yossef S

    2015-01-01

    In a recently published article in this journal, Odland, Lammy, Perle, Martin, and Grote report Monte Carlo-simulated normative base rates of scale elevations on the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). Their primary conclusion--reflected in the title of their article--is that MMPI-2-RF interpretation is associated with "high risk of pathologizing healthy adults" when the 40 substantive scales of the test are simultaneously interpreted. In this paper, we describe how their conclusion follows from several faulty premises, three of which were already debunked in an earlier article and remain false despite counterarguments proposed by Odland and colleagues. We also address these authors' misinterpretation of their analyses and, furthermore, their premise that MMPI-2-RF interpretive guidelines are flawed because they "currently do not account for a basic statistical principle: Type I (or alpha) error inflation" (p. 1). This premise is irrelevant to psychological test interpretation and misaligned with neuropsychological testing literature cited in support of it. Consistent with suggestions by some of the authors they cite, we reiterate MMPI-2-RF interpretive guidelines designed to mitigate the impact of measurement error (not alpha error) by way of a scientific assessment approach that relies on integration of information derived from multiple sources.

  1. FreeContact: fast and free software for protein contact prediction from residue co-evolution.

    PubMed

    Kaján, László; Hopf, Thomas A; Kalaš, Matúš; Marks, Debora S; Rost, Burkhard

    2014-03-26

    Twenty years of improved technology and growing sequence data now render residue-residue contact constraints derived from correlated mutations in large protein families accurate enough to drive de novo predictions of protein three-dimensional structure. The method EVfold broke new ground using mean-field Direct Coupling Analysis (EVfold-mfDCA); the method PSICOV applied a related concept by estimating a sparse inverse covariance matrix. Both methods (EVfold-mfDCA and PSICOV) are publicly available, but both require too much CPU time for interactive applications. Moreover, EVfold-mfDCA depends on proprietary software. Here, we present FreeContact, a fast, open-source implementation of EVfold-mfDCA and PSICOV. On a test set of 140 proteins, FreeContact was almost eight times faster than PSICOV without decreasing prediction performance. The EVfold-mfDCA implementation of FreeContact was over 220 times faster than PSICOV with negligible performance decrease. The original EVfold-mfDCA was unavailable for testing due to its dependency on proprietary software. FreeContact is implemented as the free C++ library "libfreecontact", complete with the command line tool "freecontact", as well as Perl and Python modules. All components are available as Debian packages. FreeContact supports the BioXSD format for interoperability. FreeContact provides the opportunity to compute reliable contact predictions in any environment (desktop or cloud).

  2. The Proteins API: accessing key integrated protein and genome information

    PubMed Central

    Antunes, Ricardo; Alpi, Emanuele; Gonzales, Leonardo; Liu, Wudong; Luo, Jie; Qi, Guoying; Turner, Edd

    2017-01-01

    The Proteins API provides searching and programmatic access to protein and associated genomics data such as curated protein sequence positional annotations from UniProtKB, as well as mapped variation and proteomics data from large-scale data sources (LSS). Using the coordinates service, researchers are able to retrieve the genomic sequence coordinates for proteins in UniProtKB. Notably, the LSS genomics and proteomics data for UniProt proteins are programmatically available only through this service. A Swagger UI has been implemented to provide documentation and an interface through which users with little or no programming experience can 'talk' to the services, quickly and easily formulate queries, and obtain dynamically generated source code for popular programming languages such as Java, Perl, Python and Ruby. Search results are returned as standard JSON, XML or GFF data objects. The Proteins API is a scalable, reliable, fast, easy-to-use set of RESTful services providing a broad protein information resource, allowing users to ask questions based on their field of expertise and to gain an integrated overview of the protein annotations available to aid their understanding of proteins in biological processes. The Proteins API is available at (http://www.ebi.ac.uk/proteins/api/doc). PMID:28383659
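
    A minimal Perl client for this kind of query, using LWP::UserAgent against the service's REST interface, might look as follows. The accession is just an example, and the JSON field names used at the end are sketched from the service's typical response rather than guaranteed; treat them as illustrative.

        #!/usr/bin/perl
        # Sketch: fetch one UniProt entry from the Proteins API as JSON.
        use strict;
        use warnings;
        use LWP::UserAgent;
        use JSON::PP;

        my $ua  = LWP::UserAgent->new(timeout => 30);
        my $url = 'https://www.ebi.ac.uk/proteins/api/proteins/P05067';  # example accession
        my $res = $ua->get($url, 'Accept' => 'application/json');
        die 'request failed: ', $res->status_line unless $res->is_success;

        my $entry = decode_json($res->decoded_content);
        # Field names below are assumptions about the returned JSON structure.
        print "$entry->{accession}: $entry->{sequence}{length} residues\n";

    The Swagger UI mentioned above generates equivalent boilerplate automatically, which is the intended route for users who would rather not write even this much by hand.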

  3. AIDE - Advanced Intrusion Detection Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cathy L.

    2013-04-28

    Would you like to know when someone has dropped an undesirable executable binary on your system? What about something less malicious, such as a software installation by a user? What about the user who decides to install a newer version of mod_perl or PHP on your web server without letting you know beforehand? Or even something as simple as an undocumented config file change made by another member of the admin group? Do you even want to know about all the changes that happen on a daily basis on your server? The purpose of an intrusion detection system (IDS) is to detect unauthorized, possibly malicious activity. The purpose of a host-based IDS, or file integrity checker, is to check for unauthorized changes to key system files, binaries, libraries, and directories on the system. AIDE is an Open Source file and directory integrity checker. AIDE will let you know when a file or directory has been added, deleted, or modified. It is included with Red Hat Enterprise Linux 6 and is available for other Linux distros. This is a case study describing the process of configuring AIDE on an out-of-the-box RHEL6 installation. Its goal is to illustrate the thinking and the process by which a useful AIDE configuration is built.
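
    A common deployment pattern for such a setup is a nightly cron wrapper around the real aide --check run. A hedged Perl sketch (the binary path is the usual RHEL location, but the log path is a placeholder; the database and report settings themselves live in /etc/aide.conf):

        #!/usr/bin/perl
        # Nightly AIDE integrity check -- a sketch of a cron wrapper.
        use strict;
        use warnings;

        my $report = `/usr/sbin/aide --check 2>&1`;   # compare filesystem to database
        my $status = $? >> 8;                         # non-zero when changes were found

        if ($status != 0) {
            open my $fh, '>>', '/var/log/aide-changes.log' or die $!;  # placeholder path
            print $fh scalar(localtime), "\n", $report, "\n";
            close $fh;
        }

    In practice the report would be mailed to the admin group rather than just logged, and the AIDE database would be re-initialized (aide --init) after each approved change.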

  4. Refractory Materials for Flame Deflector Protection System Corrosion Control: Similar Industries and/or Launch Facilities Survey

    NASA Technical Reports Server (NTRS)

    Calle, Luz Marina; Hintze, Paul E.; Parlier, Christopher R.; Coffman, Brekke E.; Sampson, Jeffrey W.; Kolody, Mark R.; Curran, Jerome P.; Perusich, Stephen A.; Trejo, David; Whitten, Mary C.

    2009-01-01

    A trade study and literature survey of refractory materials (firebrick, refractory concrete, and silicone and epoxy ablatives) were conducted to identify candidate replacement materials for Launch Complexes 39A and 39B at Kennedy Space Center (KSC). In addition, site visits and interviews with industry experts and vendors of refractory materials were conducted. As a result of the site visits and interviews, several products were identified for launch applications. Firebrick is costly to procure and install and was not used in the sites studied. Refractory concrete is gunnable, adheres well, and costs less to install. Martyte, a ceramic-filled epoxy, can protect structural steel but is costly, difficult to apply, and incompatible with silicone ablatives. Havanex, a phenolic ablative material, is easy to apply but is costly and requires frequent replacement. Silicone ablatives are inexpensive, easy to apply, and perform well outside of direct rocket impingement areas, but refractory concrete and epoxy ablatives provide better protection against direct rocket exhaust. None of the products in this trade study can be considered a panacea for these KSC launch complexes, but the refractory products, individually or in combination, may be considered for use provided the appropriate testing requirements and specifications are met.

  5. REDO: RNA Editing Detection in Plant Organelles Based on Variant Calling Results.

    PubMed

    Wu, Shuangyang; Liu, Wanfei; Aljohi, Hasan Awad; Alromaih, Sarah A; Alanazi, Ibrahim O; Lin, Qiang; Yu, Jun; Hu, Songnian

    2018-05-01

    RNA editing is a post-transcriptional or cotranscriptional process that changes the sequence of the precursor transcript by substitutions, insertions, or deletions. Almost all land plants undergo RNA editing in organelles (plastids and mitochondria). Although several software tools have been developed to identify RNA editing events, it remains a great challenge to distinguish true RNA editing events from genome variation, sequencing errors, and other factors. Here we introduce REDO, a comprehensive application tool for identifying RNA editing events in plant organelles based on variant call format (VCF) files from RNA-sequencing data. REDO is a suite of Perl scripts that illustrate a range of attributes of RNA editing events in figures and tables. REDO can also detect RNA editing events in multiple samples simultaneously and identify significantly differential proportions of RNA editing loci. Compared with similar tools, such as REDItools, REDO runs faster, with higher accuracy and more specificity at the cost of slightly lower sensitivity. Moreover, REDO annotates each RNA editing site in RNAs, whereas REDItools reports only possible RNA editing sites in the genome, which requires additional steps to obtain RNA editing profiles for RNAs. Overall, REDO can identify potential RNA editing sites easily and provides several functions such as detailed annotations, statistics, figures, and detection of significantly differential proportions of RNA editing sites among samples.
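
    Because plant organellar editing is overwhelmingly C-to-U, a first-pass candidate filter over a VCF file is easy to sketch in Perl. This illustrates the idea only, not REDO's actual logic, which adds annotation, statistics, and multi-sample tests on top:

        #!/usr/bin/perl
        # Sketch: flag C->T (i.e., C-to-U in RNA) SNVs in a VCF as editing candidates.
        use strict;
        use warnings;

        while (my $line = <>) {
            next if $line =~ /^#/;                        # skip VCF header lines
            my ($chrom, $pos, undef, $ref, $alt) = split /\t/, $line;
            # C->T on the plus strand, or G->A for transcripts on the minus strand.
            if (($ref eq 'C' && $alt eq 'T') || ($ref eq 'G' && $alt eq 'A')) {
                print "$chrom\t$pos\t$ref>$alt\tcandidate-editing-site\n";
            }
        }

    The hard part, which such a filter does not address, is exactly what the abstract names: separating genuine editing from genomic variants and sequencing error, which requires coverage, frequency, and annotation evidence.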

  6. MeRIP-PF: an easy-to-use pipeline for high-resolution peak-finding in MeRIP-Seq data.

    PubMed

    Li, Yuli; Song, Shuhui; Li, Cuiping; Yu, Jun

    2013-02-01

    RNA modifications, especially methylation of the N(6) position of adenosine (A)-m(6)A, represent an emerging research frontier in RNA biology. With the rapid development of high-throughput sequencing technology, in-depth study of m(6)A distribution and functional relevance becomes feasible. However, a robust method to effectively identify m(6)A-modified regions has not been available. Here, we present a novel, high-efficiency and user-friendly analysis pipeline called MeRIP-PF for the signal identification of MeRIP-Seq data in reference to controls. MeRIP-PF provides a statistical P-value for each identified m(6)A region based on the difference of read distribution when compared to the controls and also calculates the false discovery rate (FDR) as a cutoff to differentiate reliable m(6)A regions from the background. Furthermore, MeRIP-PF achieves gene annotation of m(6)A signals or peaks and produces outputs in both XLS and graphical format, which are useful for further study. MeRIP-PF is implemented in Perl and is freely available at http://software.big.ac.cn/MeRIP-PF.html. Copyright © 2013. Production and hosting by Elsevier Ltd.

  7. Haemophilus influenzae Genome Database (HIGDB): a single point web resource for Haemophilus influenzae.

    PubMed

    Swetha, Rayapadi G; Kala Sekar, Dinesh Kumar; Ramaiah, Sudha; Anbarasu, Anand; Sekar, Kanagaraj

    2014-12-01

    Haemophilus influenzae (H. influenzae) is the causative agent of pneumonia, bacteraemia and meningitis. The organism is responsible for a large number of deaths in both developed and developing countries. Even though the first bacterial genome to be sequenced was that of H. influenzae, there has been no exclusive database dedicated to H. influenzae. This prompted us to develop the Haemophilus influenzae Genome Database (HIGDB). All data in HIGDB are stored and managed in a MySQL database. The HIGDB is hosted on a Solaris server and developed using PERL modules. Ajax and JavaScript are used for the interface development. The HIGDB contains detailed information on 42,741 proteins and 18,077 genes, including 10 whole genome sequences, and also 284 three-dimensional structures of proteins of H. influenzae. In addition, the database provides "Motif search" and "GBrowse". The HIGDB is freely accessible through the URL: http://bioserver1.physics.iisc.ernet.in/HIGDB/. The HIGDB will be a single point of access for bacteriological, clinical, genomic and proteomic information on H. influenzae. The database can also be used to identify DNA motifs within H. influenzae genomes and to compare gene or protein sequences of a particular strain with those of other strains of H. influenzae. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Perl Tools for Automating Satellite Ground Systems

    NASA Technical Reports Server (NTRS)

    McLean, David; Haar, Therese; McDonald, James

    2000-01-01

    The freeware scripting language Perl offers many opportunities for automating satellite ground systems, for new satellites as well as older, in situ systems. This paper describes a toolkit that has evolved out of the experiences gained by using Perl to automate the ground system for the Compton Gamma Ray Observatory (CGRO) and to automate some of the elements in the Earth Observing System Data and Operations System (EDOS) ground system at Goddard Space Flight Center (GSFC). CGRO is an older ground system that was forced to automate because of funding cuts: three 8-hour shifts were cut back to one 8-hour shift, 7 days per week. EDOS supports a new mission called Terra, launched December 1999, that requires distribution and tracking of mission-critical reports throughout the world. Both of these ground systems use Perl scripts to process data and display it on the Internet, as well as scripts to coordinate many of the other systems that make these ground systems work as a coherent whole. Another task, called the Automated Multimodal Trend Analysis System (AMTAS), is looking at technology for isolation and recovery of spacecraft problems. This effort has led to prototypes that seek to evaluate various tools and technologies that meet at least some of the AMTAS goals. The tools, experiences, and lessons learned by implementing these systems are described here.

  9. Integrating Radar Image Data with Google Maps

    NASA Technical Reports Server (NTRS)

    Chapman, Bruce D.; Gibas, Sarah

    2010-01-01

    A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically, three Perl scripts that query the database. The database connections were updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.
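
    Seen from the Perl side, the Oracle-to-MySQL migration is largely a change of DBI driver string plus reworking of Oracle-specific SQL. A sketch of the connection change (database name, credentials, and the table queried are placeholders, not the AIRSAR schema):

        #!/usr/bin/perl
        # The Oracle->MySQL switch as seen from a Perl query script -- a sketch.
        use strict;
        use warnings;
        use DBI;

        # Before: Oracle via DBD::Oracle
        # my $dbh = DBI->connect('dbi:Oracle:airsar', 'user', 'pass', { RaiseError => 1 });

        # After: MySQL via DBD::mysql -- the DBI calls stay the same.
        my $dbh = DBI->connect('dbi:mysql:database=airsar;host=localhost',
                               'user', 'pass', { RaiseError => 1 });

        my $rows = $dbh->selectall_arrayref(
            'SELECT flight_id, target FROM scenes WHERE target LIKE ?',  # hypothetical table
            undef, '%volcano%');
        print "@$_\n" for @$rows;

    Because DBI abstracts the driver, the "numerous syntax errors" mentioned above typically come from the SQL itself (Oracle functions, stored procedures, outer-join syntax) rather than from the Perl API calls.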

  10. Dynamic Interactive Educational Diabetes Simulations Using the World Wide Web: An Experience of More Than 15 Years with AIDA Online

    PubMed Central

    Lehmann, Eldon D.; DeWolf, Dennis K.; Novotny, Christopher A.; Reed, Karen; Gotwals, Robert R.

    2014-01-01

    Background. AIDA is a widely available downloadable educational simulator of glucose-insulin interaction in diabetes. Methods. A web-based version of AIDA was developed that utilises a server-based architecture with HTML FORM commands to submit numerical data from a web-browser client to a remote web server. AIDA online, located on a remote server, passes the received data through Perl scripts which interactively produce 24 hr insulin and glucose simulations. Results. AIDA online allows users to modify the insulin regimen and diet of 40 different prestored “virtual diabetic patients” on the internet or create new “patients” with user-generated regimens. Multiple simulations can be run, with graphical results viewed via a standard web-browser window. To date, over 637,500 diabetes simulations have been run at AIDA online, from all over the world. Conclusions. AIDA online's functionality is similar to the downloadable AIDA program, but the mode of implementation and usage is different. An advantage to utilising a server-based application is the flexibility that can be offered. New modules can be added quickly to the online simulator. This has facilitated the development of refinements to AIDA online, which have instantaneously become available around the world, with no further local downloads or installations being required. PMID:24511312

  11. Packaging Software Assets for Reuse

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.; Marshall, J. J.; Downs, R. R.

    2010-12-01

    The reuse of existing software assets such as code, architecture, libraries, and modules in current software and systems development projects can provide many benefits, including reduced costs in time and effort and increased reliability. Many reusable assets are currently available in various online catalogs and repositories, usually organized by discipline or programming language (Ibiblio for Maven/Java developers, PyPI for Python developers, CPAN for Perl developers, etc.). The way these assets are packaged for distribution can play a role in their reuse: an asset that is packaged simply and logically is typically easier to understand, install, and use, thereby increasing its reusability. A well-packaged asset has advantages in being more reusable and thus more likely to provide benefits through its reuse. This presentation will discuss various aspects of software asset packaging and how they can affect the reusability of the assets. The characteristics of well-packaged software will be described. A software packaging domain model will be introduced, and some existing packaging approaches examined. An example case study of a Reuse Enablement System (RES), currently being created by near-term Earth science decadal survey missions, will provide information about the use of the domain model. Awareness of these factors will help software developers package their reusable assets so that they can provide the most benefits for software reuse.
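
    For a Perl asset headed to CPAN, "packaged simply and logically" typically means a conventional build script. The sketch below is a minimal Makefile.PL with illustrative module and author names; it is one common packaging idiom, not a prescription from the packaging domain model.

        # Makefile.PL -- a minimal ExtUtils::MakeMaker packaging sketch.
        # Module name, version file and prerequisites are illustrative only.
        use strict;
        use warnings;
        use ExtUtils::MakeMaker;

        WriteMakefile(
            NAME         => 'Science::AssetExample',
            VERSION_FROM => 'lib/Science/AssetExample.pm',
            ABSTRACT     => 'Example of a simply and logically packaged asset',
            AUTHOR       => 'A. N. Author <author@example.org>',
            LICENSE      => 'perl',
            PREREQ_PM    => {
                'DBI' => 0,   # declared dependencies make installation predictable
            },
        );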

  12. Technique for diamond machining large ZnSe grisms for the Rapid Infrared Imager/Spectrograph (RIMAS)

    NASA Astrophysics Data System (ADS)

    Kuzmenko, Paul J.; Little, Steve L.; Kutyrev, Alexander S.; Capone, John I.

    2016-07-01

    The Rapid Infrared Imager/Spectrograph (RIMAS) is an instrument designed to observe gamma ray burst afterglows following initial detection by the SWIFT satellite. Operating in the near infrared between 0.9 and 2.4 μm, it has capabilities for both low-resolution (R ~ 25) and moderate-resolution (R ~ 4000) spectroscopy. Two zinc selenide (ZnSe) grisms provide dispersion in the moderate-resolution mode: one covers the Y and J bands and the other covers the H and K bands. Each has a clear aperture of 44 mm. The YJ grism has a blaze angle of 49.9° with a 40 μm groove spacing. The HK grism is blazed at 43.1° with a 50 μm groove spacing. Previous fabrication of ZnSe grisms on the Precision Engineering Research Lathe (PERL II) at LLNL has demonstrated the importance of surface preparation, tool and fixture design, tight thermal control, and backup power sources for the machine. The biggest challenges in machining the RIMAS grisms are the large grooved area, which implies a long machining time, and the relatively steep blaze angle, which means that the grism wavefront error is much more sensitive to lathe metrology errors. Mitigating techniques are described.

  13. Tool for Merging Proposals Into DSN Schedules

    NASA Technical Reports Server (NTRS)

    Khanampornpan, Teerapat; Kwok, John; Call, Jared

    2008-01-01

    A Practical Extraction and Reporting Language (Perl) script called merge7da has been developed to facilitate determination, by a project scheduler in NASA's Deep Space Network, of whether a proposal for use of the DSN could create a conflict with the current DSN schedule. Prior to the development of merge7da, there was no way to quickly identify potential schedule conflicts: it was necessary to submit a proposal and wait a day or two for a response from a DSN scheduling facility. By using merge7da to detect and eliminate potential schedule conflicts before submitting a proposal, a project scheduler saves time and gains assurance that the proposal will probably be accepted. merge7da accepts two input files, one of which contains the current DSN schedule and is in a DSN-standard format called '7da'. The other input file contains the proposal and is in another DSN-standard format called 'C1/C2'. merge7da processes the two input files to produce a merged 7da-format output file that represents the DSN schedule as it would be if the proposal were to be adopted. This 7da output file can be loaded into various DSN scheduling software tools now in use.
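
    The record layouts of the 7da and C1/C2 formats are DSN-internal and not given here, so the following is only a structural sketch of the merge step, assuming (purely for illustration) that each event record begins with a numeric start time.

        #!/usr/bin/perl
        # Sketch of the merge7da idea: combine the current schedule with a
        # proposal and emit one merged, time-ordered schedule.  The parsing
        # of a leading start-time field is an assumption, not the real format.
        use strict;
        use warnings;

        my ( $schedule_file, $proposal_file ) = @ARGV;
        die "usage: merge7da <7da file> <C1/C2 file>\n" unless defined $proposal_file;

        my @events;
        for my $file ( $schedule_file, $proposal_file ) {
            open my $fh, '<', $file or die "cannot open $file: $!";
            while (<$fh>) {
                next unless /^(\d+)\s/;    # hypothetical: leading start time
                push @events, { start => $1, record => $_ };
            }
            close $fh;
        }

        # Time-order everything; overlapping entries then sit next to each
        # other in the output, where a conflict check could flag them.
        print $_->{record} for sort { $a->{start} <=> $b->{start} } @events;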

  14. Test case for VVER-1000 complex modeling using MCU and ATHLET

    NASA Astrophysics Data System (ADS)

    Bahdanovich, R. B.; Bogdanova, E. V.; Gamtsemlidze, I. D.; Nikonov, S. P.; Tikhomirov, G. V.

    2017-01-01

    The correct modeling of processes occurring in the reactor core is very important. In the design and operation of nuclear reactors it is necessary to cover the entire range of reactor physics. Very often the calculations are carried out within the framework of only one domain, for example, structural analysis, neutronics (NT) or thermal hydraulics (TH). However, this is not always adequate, as the impact of related physical processes occurring simultaneously can be significant. It is therefore recommended to perform coupled calculations. This paper provides a test case for the coupled neutronics-thermal hydraulics calculation of a VVER-1000 using the precision Monte Carlo neutron code MCU and the system engineering code ATHLET. The model is based on a fuel assembly (type 2M). A test case for the calculation of power distribution, fuel and coolant temperature, coolant density, etc. has been developed. It is assumed that the test case will be used for simulation of the VVER-1000 reactor and in calculations using other programs, for example, for code cross-verification. A detailed description of the codes (MCU, ATHLET), the geometry and material composition of the model, and an iterative calculation scheme is given in the paper. A script in the Perl language was written to couple the codes.
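
    A coupling script of this kind typically alternates the two codes until a scalar such as k-eff stabilizes. The sketch below assumes hypothetical executable names, file names and output formats; the actual MCU/ATHLET interfaces are not described in the abstract.

        #!/usr/bin/perl
        # Sketch of an NT/TH coupling driver.  Everything named here
        # (mcu_run, athlet_run, the "keff = ..." line) is an assumption.
        use strict;
        use warnings;

        my $tol  = 1e-4;
        my $prev = 0;

        for my $iter ( 1 .. 50 ) {
            system('mcu_run power.inp > mcu.log')     == 0 or die "MCU failed\n";
            system('athlet_run th.inp > athlet.log')  == 0 or die "ATHLET failed\n";

            # In a real coupling step, the power distribution would be copied
            # into the TH input and the temperatures/densities back into the
            # NT input here.  Convergence is checked on a scalar such as keff.
            my $curr = read_keff('mcu.log');
            printf "iteration %d: keff = %.5f\n", $iter, $curr;
            last if abs( $curr - $prev ) < $tol;
            $prev = $curr;
        }

        # Hypothetical extractor: find a line of the form "keff = 1.00123".
        sub read_keff {
            my ($file) = @_;
            open my $fh, '<', $file or die "cannot open $file: $!";
            while (<$fh>) { return $1 if /keff\s*=\s*([\d.]+)/ }
            die "no keff found in $file\n";
        }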

  15. Monitoring Temperature and Fan Speed Using Ganglia and Winbond Chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaffrey, Cattie (SLAC)

    2006-09-27

    Effective monitoring is essential to keep a large group of machines, like the ones at the Stanford Linear Accelerator Center (SLAC), up and running. SLAC currently uses the Ganglia Monitoring System to observe about 2000 machines, analyzing metrics like CPU usage and I/O rate. However, metrics essential to machine hardware health, such as temperature and fan speed, are not being monitored. Many machines have a Winbond w83782d chip which monitors three temperatures, two of which come from dual CPUs, and returns the information when the sensors command is invoked. Ganglia also provides a feature, gmetric, that allows users to monitor their own metrics and incorporate them into the monitoring system. The programming language Perl was chosen to implement a script that invokes the sensors command, extracts the temperature and fan speed information, and calls gmetric with the appropriate arguments. Two machines were used to test the script; the two CPUs on each machine run at about 65 Celsius, which is well within the operating temperature range (the maximum safe temperature is 77-82 Celsius for the Pentium III processors being used). Installing the script on all machines with a Winbond w83782d chip allows the SLAC Scientific Computing and Computing Services group (SCCS) to better evaluate current cooling methods.
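
    A minimal sketch of such a script: parse the output of the sensors command and feed each reading to gmetric. The regular expression is an assumption, since lm_sensors output varies by chip and driver.

        #!/usr/bin/perl
        # Run `sensors`, pull out CPU temperatures, and push them into
        # Ganglia via gmetric.  The line format matched below is assumed.
        use strict;
        use warnings;

        my @lines = `sensors`;
        for my $line (@lines) {
            # Lines such as "CPU0 Temp:  +65.0 C" (format varies by driver).
            if ( $line =~ /^CPU(\d+)\s+Temp:\s*\+?([\d.]+)/i ) {
                my ( $id, $temp ) = ( $1, $2 );
                system( 'gmetric',
                    '--name',  "cpu${id}_temp",
                    '--value', $temp,
                    '--type',  'float',
                    '--units', 'Celsius',
                ) == 0 or warn "gmetric failed for CPU$id\n";
            }
        }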

  16. Project Report: Automatic Sequence Processor Software Analysis

    NASA Technical Reports Server (NTRS)

    Benjamin, Brandon

    2011-01-01

    The Mission Planning and Sequencing (MPS) element of Multi-Mission Ground System and Services (MGSS) provides space missions with multi-purpose software to plan spacecraft activities, sequence spacecraft commands, and then integrate these products and execute them on the spacecraft. The Jet Propulsion Laboratory (JPL) is currently flying many missions. The processes for building, integrating, and testing the multi-mission uplink software need to be improved to meet the needs of the missions and the operations teams that command the spacecraft. The Multi-Mission Sequencing Team is responsible for collecting and processing the observations, experiments and engineering activities that are to be performed on a selected spacecraft. The collection of these activities is called a sequence, and ultimately a sequence becomes a sequence of spacecraft commands. The operations teams check the sequence to make sure that no constraints are violated. The workflow process involves sending a program start command, which activates the Automatic Sequence Processor (ASP). The ASP is currently a file-based system comprising scripts written in Perl, C shell and awk. Once this start process is complete, the system checks for errors and aborts if there are any; otherwise the system converts the commands to binary and then sends the resultant information to be radiated to the spacecraft.

  17. All-silicon tandem solar cells: Practical limits for energy conversion and possible routes for improvement

    NASA Astrophysics Data System (ADS)

    Jia, Xuguang; Puthen-Veettil, Binesh; Xia, Hongze; Yang, Terry Chien-Jen; Lin, Ziyun; Zhang, Tian; Wu, Lingfeng; Nomoto, Keita; Conibeer, Gavin; Perez-Wurfl, Ivan

    2016-06-01

    Silicon nanocrystals (Si NCs) embedded in a dielectric matrix are regarded as one of the most promising materials for third-generation photovoltaics, owing to their tunable bandgap that allows fabrication of optimized tandem devices. Previous work has demonstrated fabrication of Si NC based tandem solar cells by sputter-annealing of thin multi-layers of silicon-rich oxide and SiO2. However, these device efficiencies were much lower than expected, given that their theoretical values are much higher. Thus, it is necessary to understand the practical conversion efficiency limits for these devices. In this article, practical efficiency limits of Si NC based double-junction tandem cells determined by fundamental material properties such as minority carrier mobility and lifetime are investigated. The practical conversion efficiency limits for these devices are significantly different from the reported efficiency limits, which use Shockley-Queisser assumptions. Results show that the practical efficiency limit of a double-junction cell (a 1.6 eV Si NC top cell and a 25% efficient c-Si PERL cell as the bottom cell) is 32%. Based on these results, suggestions for improving the performance of Si nanocrystal based tandem solar cells in terms of the different parameters that were simulated are presented.

  18. Observing proposals on the Web at the National Optical Astronomy Observatories

    NASA Astrophysics Data System (ADS)

    Pilachowski, Catherine A.; Barnes, Jeannette; Bell, David J.

    1998-07-01

    Proposals for telescope time at facilities available through the National Optical Astronomy Observatories can now be prepared and submitted via the WWW. Investigators submit proposal information through a series of HTML forms to the NOAO server, where the information is processed by Perl CGI scripts. PostScript figures and ASCII files may be attached by investigators for inclusion in their proposals using their browser's upload feature. Proposal information is saved on the server so that investigators can return in later sessions to continue work on a proposal and so that collaborators can participate in writing the proposal if they have access to the proposal account name and password. The system provides online verification of LaTeX syntax and a spellchecker, and confirms that all sections of the proposal are filled out. Users can request a LaTeX or PostScript copy of their proposal by e-mail, or view the proposal online. The advantages of the Web-based process for our users are convenience, access to online documentation, and the simple interface which avoids direct confrontation with LaTeX. From the NOAO point of view, the advantage is the use of standardized formats and syntax, particularly as we begin to receive proposals for the Gemini telescopes and some independent observatories.
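
    Server-side LaTeX verification can be as simple as a non-interactive compile followed by a scan of the log. A minimal sketch follows; the file names are placeholders, and the real NOAO scripts are not shown in the record.

        #!/usr/bin/perl
        # Check a proposal section for LaTeX syntax errors.
        use strict;
        use warnings;

        my $tex = 'proposal.tex';
        # Run LaTeX non-interactively so a syntax error cannot hang the server.
        system("latex -interaction=nonstopmode $tex > /dev/null 2>&1");

        # LaTeX reports errors on log lines beginning with '!': collect them.
        open my $log, '<', 'proposal.log' or die "no log file: $!";
        my @errors = grep { /^!/ } <$log>;
        close $log;

        print @errors ? "LaTeX errors found:\n@errors"
                      : "LaTeX syntax check passed.\n";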

  19. PDB explorer -- a web based algorithm for protein annotation viewer and 3D visualization.

    PubMed

    Nayarisseri, Anuraj; Shardiwal, Rakesh Kumar; Yadav, Mukesh; Kanungo, Neha; Singh, Pooja; Shah, Pratik; Ahmed, Sheaza

    2014-12-01

    The PDB file format is a text format characterizing the three-dimensional structures of macromolecules available in the Protein Data Bank (PDB). Determined protein structures are often found in association with other molecules or ions, such as nucleic acids, water, ions and drug molecules, which can therefore also be described in the PDB format and have been deposited in the PDB database. A PDB file is machine-generated and is not in a human-readable format; computational tools are needed to interpret it. The objective of the present study was to develop free online software for retrieval, visualization and reading of the annotation of a protein 3D structure available in the PDB database. The main aim is to present the PDB file in a human-readable format, i.e., to convert the information in the PDB file into readable sentences. It displays all possible information from a PDB file, including the 3D structure. Programming and scripting languages such as Perl, CSS, JavaScript, Ajax and HTML have been used for the development of PDB Explorer. PDB Explorer directly parses the PDB file, calling methods for each parsed element: secondary structure elements, atoms, coordinates, etc. PDB Explorer is freely available at http://www.pdbexplorer.eminentbio.com/home, with no log-in required.
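
    Direct PDB parsing relies on the format's fixed column layout. The sketch below extracts atom records the way a parser like PDB Explorer's might, although the actual implementation is not shown in the abstract.

        #!/usr/bin/perl
        # Parse ATOM/HETATM records from a PDB file using the format's
        # fixed columns (PDB columns are 1-based; substr offsets are 0-based).
        use strict;
        use warnings;

        my $file = shift or die "usage: $0 <pdb file>\n";
        open my $fh, '<', $file or die "cannot open $file: $!";

        while ( my $line = <$fh> ) {
            next unless $line =~ /^(?:ATOM|HETATM)/;
            my $name  = substr $line, 12, 4;    # atom name    (cols 13-16)
            my $res   = substr $line, 17, 3;    # residue name (cols 18-20)
            my $chain = substr $line, 21, 1;    # chain ID     (col  22)
            my ( $x, $y, $z ) = map { substr $line, $_, 8 }
                                ( 30, 38, 46 ); # coordinates  (cols 31-54)
            printf "%s %s/%s at (%.3f, %.3f, %.3f)\n",
                $name, $res, $chain, $x, $y, $z;
        }
        close $fh;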

  20. Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.

    PubMed

    Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S

    2016-03-01

    To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.

  1. FBIS: A regional DNA barcode archival & analysis system for Indian fishes

    PubMed Central

    Nagpure, Naresh Sahebrao; Rashid, Iliyas; Pathak, Ajey Kumar; Singh, Mahender; Singh, Shri Prakash; Sarkar, Uttam Kumar

    2012-01-01

    The DNA barcode is a new tool for taxon recognition and classification of biological organisms based on the sequence of a fragment of the mitochondrial gene cytochrome c oxidase I (COI). In view of the growing importance of fish DNA barcoding for species identification, molecular taxonomy and fish diversity conservation, we developed a Fish Barcode Information System (FBIS) for Indian fishes, which will serve as a regional DNA barcode archival and analysis system. The database presently contains 2334 sequence records of the COI gene for 472 aquatic species belonging to 39 orders and 136 families, collected from available published data sources. Additionally, it contains information on the phenotype, distribution and IUCN Red List status of fishes. The web version of FBIS was designed using MySQL, Perl and PHP under the Linux operating platform to (a) store and manage data acquisition, (b) analyze and explore DNA barcode records and (c) identify species and estimate genetic divergence. FBIS has also been integrated with appropriate tools for retrieving and viewing information about database statistics and taxonomy. It is expected that FBIS will be useful as a potent information system in fish molecular taxonomy, phylogeny and genomics. Availability: The database is available for free at http://mail.nbfgr.res.in/fbis/ PMID:22715304

  2. UUCD: a family-based database of ubiquitin and ubiquitin-like conjugation.

    PubMed

    Gao, Tianshun; Liu, Zexian; Wang, Yongbo; Cheng, Han; Yang, Qing; Guo, Anyuan; Ren, Jian; Xue, Yu

    2013-01-01

    In this work, we developed UUCD (http://uucd.biocuckoo.org), a family-based database of ubiquitin and ubiquitin-like conjugation, which is one of the most important post-translational modifications, responsible for regulating a variety of cellular processes through a similar E1 (ubiquitin-activating enzyme)-E2 (ubiquitin-conjugating enzyme)-E3 (ubiquitin-protein ligase) enzyme thioester cascade. Although extensive experimental efforts have been made, an integrative data resource is still not available. From the scientific literature, 26 E1s, 105 E2s, 1003 E3s and 148 deubiquitination enzymes (DUBs) were collected and classified into 1, 3, 19 and 7 families, respectively. To computationally characterize potential enzymes in eukaryotes, we constructed 1, 1, 15 and 6 hidden Markov model (HMM) profiles for E1s, E2s, E3s and DUBs at the family level, separately. Moreover, ortholog searches were conducted for E3 and DUB families without HMM profiles. The UUCD database was then developed with 738 E1s, 2937 E2s, 46 631 E3s and 6647 DUBs from 70 eukaryotic species. Detailed annotations and classifications are also provided. The online service of UUCD was implemented in PHP + MySQL + JavaScript + Perl.

  3. Modernizing the MagIC Paleomagnetic and Rock Magnetic Database Technology Stack to Encourage Code Reuse and Reproducible Science

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.

    2016-12-01

    The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.

  4. SNPGenie: estimating evolutionary parameters to detect natural selection using pooled next-generation sequencing data.

    PubMed

    Nelson, Chase W; Moncla, Louise H; Hughes, Austin L

    2015-11-15

    New applications of next-generation sequencing technologies use pools of DNA from multiple individuals to estimate population genetic parameters. However, no publicly available tools exist to analyse single-nucleotide polymorphism (SNP) calling results directly for evolutionary parameters important in detecting natural selection, including nucleotide diversity and gene diversity. We have developed SNPGenie to fill this gap. The user submits a FASTA reference sequence(s), a Gene Transfer Format (.GTF) file with CDS information and a SNP report(s) in an increasing selection of formats. The program estimates nucleotide diversity, distance from the reference and gene diversity. Sites are flagged for multiple overlapping reading frames, and are categorized by polymorphism type: nonsynonymous, synonymous, or ambiguous. The results allow single-nucleotide, single-codon, sliding-window, whole-gene and whole-genome/population analyses that aid in the detection of positive and purifying natural selection in the source population. SNPGenie version 1.2 is a Perl program with no additional dependencies. It is free, open source, and available for download at https://github.com/hugheslab/snpgenie. Contact: nelsoncw@email.sc.edu or austin@biol.sc.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
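
    For orientation, the two quantities named above have standard estimators in the style of Nei and Li; SNPGenie's exact pooled-data formulas may differ in detail, so treat these as a sketch:

        % Nucleotide diversity: average pairwise differences per site,
        % where x_i is the frequency of sequence i and pi_ij the
        % proportion of sites differing between sequences i and j.
        \pi = \sum_{i<j} 2\, x_i x_j \, \pi_{ij}

        % Gene diversity at a site with allele frequencies p_k in a
        % sample of n sequences (unbiased heterozygosity).
        H = \frac{n}{n-1}\Bigl(1 - \sum_k p_k^2\Bigr)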

  5. BaCoCa--a heuristic software tool for the parallel assessment of sequence biases in hundreds of gene and taxon partitions.

    PubMed

    Kück, Patrick; Struck, Torsten H

    2014-01-01

    BaCoCa (BAse COmposition CAlculator) is user-friendly software that combines multiple statistical approaches (like RCFV and c-value calculations) to identify biases in aligned sequence data which can potentially mislead phylogenetic reconstructions. As a result of its speed and flexibility, the program makes it possible to analyze hundreds of pre-defined gene partitions and taxon subsets in one single process run. BaCoCa is command-line driven and can be easily integrated into automated process pipelines of phylogenomic studies. Moreover, given the tab-delimited output style, the results can easily be used for further analyses in programs like Excel or statistical packages like R. A built-in option of BaCoCa is the generation of heat maps with hierarchical clustering of certain results using R. As input files BaCoCa can handle FASTA and relaxed PHYLIP, which are commonly used in phylogenomic pipelines. BaCoCa is implemented in Perl and works on Windows PCs, Macs and Linux operating systems. The executable source code as well as example test files and detailed documentation of BaCoCa are freely available at http://software.zfmk.de. Copyright © 2013 Elsevier Inc. All rights reserved.
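
    As a sketch of the kind of statistic involved: RCFV (relative composition frequency variability) is commonly defined as the summed absolute deviation of each taxon's character-state frequencies from the across-taxon mean; BaCoCa's exact implementation should be checked against its documentation.

        % f_{x,i}: frequency of state x in taxon i; \bar{f}_x: mean
        % frequency of state x across taxa; n: number of taxa;
        % m: number of character states (4 for nucleotides).
        \mathrm{RCFV} = \sum_{x=1}^{m} \sum_{i=1}^{n}
            \frac{\lvert f_{x,i} - \bar{f}_x \rvert}{n}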

  6. Dynamic Interactive Educational Diabetes Simulations Using the World Wide Web: An Experience of More Than 15 Years with AIDA Online.

    PubMed

    Lehmann, Eldon D; Dewolf, Dennis K; Novotny, Christopher A; Reed, Karen; Gotwals, Robert R

    2014-01-01

    Background. AIDA is a widely available downloadable educational simulator of glucose-insulin interaction in diabetes. Methods. A web-based version of AIDA was developed that utilises a server-based architecture with HTML FORM commands to submit numerical data from a web-browser client to a remote web server. AIDA online, located on a remote server, passes the received data through Perl scripts which interactively produce 24 hr insulin and glucose simulations. Results. AIDA online allows users to modify the insulin regimen and diet of 40 different prestored "virtual diabetic patients" on the internet or create new "patients" with user-generated regimens. Multiple simulations can be run, with graphical results viewed via a standard web-browser window. To date, over 637,500 diabetes simulations have been run at AIDA online, from all over the world. Conclusions. AIDA online's functionality is similar to the downloadable AIDA program, but the mode of implementation and usage is different. An advantage to utilising a server-based application is the flexibility that can be offered. New modules can be added quickly to the online simulator. This has facilitated the development of refinements to AIDA online, which have instantaneously become available around the world, with no further local downloads or installations being required.

  7. Lost in Translation: Bioinformatic Analysis of Variations Affecting the Translation Initiation Codon in the Human Genome.

    PubMed

    Abad, Francisco; de la Morena-Barrio, María Eugenia; Fernández-Breis, Jesualdo Tomás; Corral, Javier

    2018-06-01

    Translation is a key biological process, controlled in eukaryotes by the initiation AUG codon. Variations affecting this codon may have pathological consequences by disturbing the correct initiation of translation. Unfortunately, there is no systematic study describing these variations in the human genome. Moreover, we aimed to develop new tools for in silico prediction of the pathogenicity of gene variations affecting AUG codons, because to date these gene defects have been wrongly classified as missense. Whole-exome analysis revealed a mean of 12 gene variations per person affecting initiation codons, mostly with high (>0.01) minor allele frequency (MAF). Moreover, analysis of Ensembl data (December 2017) revealed 11,261 genetic variations affecting the initiation AUG codon of 7,205 genes. Most of these variations (99.5%) have low or unknown MAF, probably reflecting deleterious consequences. Only 62 variations had high MAF. Genetic variations with high MAF had closer alternative downstream AUG codons than did those with low MAF. In addition, the high-MAF group better maintained both the signal peptide and the reading frame. These differentiating elements could help to determine the pathogenicity of this kind of variation. Data and scripts in Perl and R are freely available at https://github.com/fanavarro/hemodonacion. Contact: jfernand@um.es. Supplementary data are available at Bioinformatics online.

  8. Quality Controlling CMIP datasets at GFDL

    NASA Astrophysics Data System (ADS)

    Horowitz, L. W.; Radhakrishnan, A.; Balaji, V.; Adcroft, A.; Krasting, J. P.; Nikonov, S.; Mason, E. E.; Schweitzer, R.; Nadeau, D.

    2017-12-01

    As GFDL makes the switch from model development to production in light of the Coupled Model Intercomparison Project (CMIP), GFDL's efforts have shifted to testing and, more importantly, to establishing guidelines and protocols for quality control and semi-automated data publishing. Every CMIP cycle introduces key challenges, and the upcoming CMIP6 is no exception. The new CMIP experimental design comprises multiple MIPs facilitating research in different focus areas. This paradigm has implications not only for the groups that develop the models and conduct the runs, but also for the groups that monitor, analyze and quality control the datasets before publishing, and before their content makes its way into reports like the IPCC (Intergovernmental Panel on Climate Change) Assessment Reports. In this talk, we discuss some of the paths taken at GFDL to quality control the CMIP-ready datasets, including Jupyter notebooks, PrePARE, and a LAMP (Linux, Apache, MySQL, PHP/Python/Perl) technology-driven tracker system to monitor the status of experiments qualitatively and quantitatively and to provide additional metadata and analysis services, along with some built-in controlled-vocabulary validations in the workflow. In addition, we discuss the integration of community-based model evaluation software (ESMValTool, PCMDI Metrics Package, and ILAMB) as part of our CMIP6 workflow.

  9. PANGEA: pipeline for analysis of next generation amplicons

    PubMed Central

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz FW; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-01-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including preprocessing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OSX, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the χ2 step, are joined into one program called the ‘backbone’. PMID:20182525

  10. PANGEA: pipeline for analysis of next generation amplicons.

    PubMed

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz F W; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-07-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including pre-processing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OSX, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the χ2 step, are joined into one program called the 'backbone'.

  11. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    PubMed

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.

  12. The Proteins API: accessing key integrated protein and genome information.

    PubMed

    Nightingale, Andrew; Antunes, Ricardo; Alpi, Emanuele; Bursteinas, Borisas; Gonzales, Leonardo; Liu, Wudong; Luo, Jie; Qi, Guoying; Turner, Edd; Martin, Maria

    2017-07-03

    The Proteins API provides searching and programmatic access to protein and associated genomics data, such as curated protein sequence positional annotations from UniProtKB, as well as mapped variation and proteomics data from large-scale data sources (LSS). Using the coordinates service, researchers are able to retrieve the genomic sequence coordinates for proteins in UniProtKB. Thus, the LSS genomics and proteomics data for UniProt proteins are programmatically available only through this service. A Swagger UI has been implemented to provide documentation and an interface that allows users with little or no programming experience to 'talk' to the services, quickly and easily formulate queries, and obtain dynamically generated source code for popular programming languages such as Java, Perl, Python and Ruby. Search results are returned as standard JSON, XML or GFF data objects. The Proteins API is a scalable, reliable, fast, easy-to-use RESTful service that provides a broad protein information resource, allowing users to ask questions based upon their field of expertise and to gain an integrated overview of the protein annotations available to aid their knowledge of proteins in biological processes. The Proteins API is available at http://www.ebi.ac.uk/proteins/api/doc. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
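
    A minimal Perl client for the coordinates service might look like the sketch below; the endpoint path is inferred from the documented base URL, so it should be checked against the Swagger UI before use.

        #!/usr/bin/perl
        # Fetch genomic coordinates for a UniProtKB accession as JSON.
        use strict;
        use warnings;
        use LWP::UserAgent;
        use JSON qw(decode_json);

        my $acc = 'P04637';    # example accession (human p53)
        my $ua  = LWP::UserAgent->new( timeout => 30 );

        # Assumed endpoint path under the documented API base URL.
        my $res = $ua->get(
            "https://www.ebi.ac.uk/proteins/api/coordinates/$acc",
            'Accept' => 'application/json',
        );
        die 'request failed: ' . $res->status_line unless $res->is_success;

        my $data = decode_json( $res->decoded_content );
        print ref($data) eq 'ARRAY'
            ? 'records returned: ' . scalar(@$data) . "\n"
            : 'top-level fields: ' . join( ', ', sort keys %$data ) . "\n";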

  13. The TJO-OAdM robotic observatory: OpenROCS and dome control

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Francisco, Xavier; Ribas, Ignasi; Casteels, Kevin; Martín, Jonatan

    2010-07-01

    The Telescope Joan Oró at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory operating under completely unattended control. There are key problems to solve when robotic control is envisaged, on both the hardware and the software side. We present OpenROCS (Robotic Observatory Control System), an open-source platform developed for the robotic control of the TJO - OAdM and similar astronomical observatories. It is a complex software architecture, composed of several applications for hardware control, event handling, environment monitoring, target scheduling, the image reduction pipeline, etc. The code is developed in Java, C++, Python and Perl. The software infrastructure is based on the Internet Communications Engine (Ice), an object-oriented middleware that provides remote procedure call, grid computing, and publish/subscribe functionality. We also describe the subsystem in charge of dome control: several hardware and software elements developed specially to protect the system at this identified single point of failure. It integrates redundant control and a rain detector signal for alarm triggering, and it responds autonomously if communication with any of the control elements is lost (watchdog functionality). The self-developed control software suite (OpenROCS) and the dome control system have proven to be highly reliable.

  14. eHive: an artificial intelligence workflow system for genomic analysis.

    PubMed

    Severin, Jessica; Beal, Kathryn; Vilella, Albert J; Fitzgerald, Stephen; Schuster, Michael; Gordon, Leo; Ureta-Vidal, Abel; Flicek, Paul; Herrero, Javier

    2010-05-11

    The Ensembl project produces updates to its comparative genomics resources with each of its several releases per year. During each release cycle approximately two weeks are allocated to generate all the genomic alignments and the protein homology predictions. The number of calculations required for this task grows approximately quadratically with the number of species. We currently support 50 species in Ensembl and we expect the number to continue to grow in the future. We present eHive, a new fault tolerant distributed processing system initially designed to support comparative genomic analysis, based on blackboard systems, network distributed autonomous agents, dataflow graphs and block-branch diagrams. In the eHive system a MySQL database serves as the central blackboard and the autonomous agent, a Perl script, queries the system and runs jobs as required. The system allows us to define dataflow and branching rules to suit all our production pipelines. We describe the implementation of three pipelines: (1) pairwise whole genome alignments, (2) multiple whole genome alignments and (3) gene trees with protein homology inference. Finally, we show the efficiency of the system in real case scenarios. eHive allows us to produce computationally demanding results in a reliable and efficient way with minimal supervision and high throughput. Further documentation is available at: http://www.ensembl.org/info/docs/eHive/.
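
    The blackboard pattern is easy to mimic: each agent claims a row from a central jobs table, runs it, and records the outcome. The schema below is illustrative and is not eHive's actual table layout.

        #!/usr/bin/perl
        # Sketch of an autonomous Perl agent polling a MySQL blackboard.
        # Table and column names are placeholders, not eHive's own schema.
        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'dbi:mysql:database=hive;host=blackboard',
            'agent', 'secret', { RaiseError => 1 } );

        while (1) {
            # Atomically claim one READY job for this worker (by PID).
            my $claimed = $dbh->do(
                q{UPDATE jobs SET status = 'CLAIMED', worker = ?
                  WHERE status = 'READY' LIMIT 1}, undef, $$ );
            last if $claimed == 0;    # nothing left to do

            my ( $id, $cmd ) = $dbh->selectrow_array(
                q{SELECT job_id, command FROM jobs
                  WHERE status = 'CLAIMED' AND worker = ?}, undef, $$ );

            # Run the job and write the outcome back to the blackboard.
            my $rc = system($cmd);
            $dbh->do( q{UPDATE jobs SET status = ? WHERE job_id = ?},
                undef, $rc == 0 ? 'DONE' : 'FAILED', $id );
        }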

  15. Lawrence Livermore National Laboratory ULTRA-350 Test Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, D J; Wulff, T A; Carlisle, K

    2001-04-10

    LLNL has many in-house designed high-precision machine tools. Some of these tools include the Large Optics Diamond Turning Machine (LODTM) [1], Diamond Turning Machine No. 3 (DTM-3) and two Precision Engineering Research Lathes (PERL-I and PERL-II). These machines have accuracy in the sub-micron range and in most cases position resolution in the range of a couple of nanometers. All of these machines are built with similar underlying technologies. The machines use capstan drive technology, laser interferometer position feedback, tachometer velocity feedback, permanent magnet (PM) brush motors and analog velocity and position loop servo compensation [2]. The machine controller does not perform any servo compensation; it simply computes the difference between the commanded position and the actual position (the following error) and sends this to a D/A converter for the analog servo position loop. LLNL is designing a new high-precision diamond turning machine, called the ULTRA 350 [3]. In contrast to many of the proven technologies discussed above, the plan for the new machine is to use brushless linear motors, high-precision linear scales, machine-controller motor commutation and digital servo compensation for the velocity and position loops. Although none of these technologies is new, and all have been in use in industry, their application to high-precision diamond turning is limited. To minimize the risks of these technologies in the new machine design, LLNL has established a test bed to evaluate them for application in high-precision diamond turning. The test bed is primarily composed of commercially available components. This includes the slide with opposed hydrostatic bearings, the oil system, the brushless PM linear motor, the two-phase-input three-phase-output linear motor amplifier and the system controller. The linear scales are not yet commercially available but use a common electronic output format. As of this writing, the final verdict on the use of these technologies is still out, but the first part of the work has been completed with promising results. The goal of this part of the work was to close a servo position loop around a slide incorporating these technologies and to measure the performance. This paper discusses the tests that were set up for system evaluation and the results of the measurements made. Some very promising results include slide positioning at the nanometer level and slow-speed slide direction reversal at less than 100 nm/min with no observed discontinuities. This is very important for machine contouring in diamond turning. As a point of reference, at 100 nm/min it would take the slide almost 7 years to complete the full designed travel of 350 mm. This speed has been demonstrated without the use of a velocity sensor; the velocity is derived from the position sensor. With what has been learned on the test bed, the paper finishes with a brief comparison of the old and new technologies. The emphasis of this comparison is on the servo performance as illustrated with Bode plot diagrams.

  16. Inhibition of iron overload-induced apoptosis and necrosis of bone marrow mesenchymal stem cells by melatonin.

    PubMed

    Yang, Fan; Li, Yuan; Yan, Gege; Liu, Tianyi; Feng, Chao; Gong, Rui; Yuan, Ye; Ding, Fengzhi; Zhang, Lai; Idiiatullina, Elina; Pavlov, Valentin; Han, Zhenbo; Ma, Wenya; Huang, Qi; Yu, Ying; Bao, Zhengyi; Wang, Xiuxiu; Hua, Bingjie; Du, Zhimin; Cai, Benzhi; Yang, Lei

    2017-05-09

    Iron overload induces severe damage to several vital organs such as the liver, heart and bone, and thus contributes to the dysfunction of these organs. The aim of this study was to investigate whether iron overload causes apoptosis and necrosis of bone marrow mesenchymal stem cells (BMSCs) and whether melatonin may prevent this toxicity. Perls' Prussian blue staining showed that exposure to increasing concentrations of ferric ammonium citrate (FAC) induced a gradual increase in the intracellular iron level in BMSCs. Trypan blue staining demonstrated that FAC decreased the viability of BMSCs in a concentration-dependent manner. Notably, melatonin protected BMSCs against apoptosis and necrosis induced by FAC, as verified by Live/Dead, TUNEL and PI/Hoechst staining. Furthermore, melatonin pretreatment suppressed FAC-induced reactive oxygen species accumulation. Western blotting showed that exposure to FAC resulted in a decrease of the anti-apoptotic protein Bcl-2 and an increase of the pro-apoptotic proteins Bax and cleaved caspase-3 and the necrosis-related proteins RIP1 and RIP3, changes that were significantly inhibited by melatonin treatment. Finally, the melatonin receptor blocker luzindole failed to block the protection by melatonin against BMSC apoptosis and necrosis. Taken together, melatonin protected BMSCs from iron overload-induced apoptosis and necrosis by regulating the Bcl-2, Bax, cleaved caspase-3, RIP1 and RIP3 pathways.

  17. Particles and forces. At the heart of matter. Readings from Scientific American magazine.

    NASA Astrophysics Data System (ADS)

    Carrigan, R. A., Jr.; Trower, W. P.

    In this volume a selection of Scientific American articles chronicles the most recent developments in particle physics. In these twelve articles, distinguished physicists look at the tools, ideas, and experiments that shed light on events at the early moments of the universe, as well as the increasingly sophisticated instruments that will make further developments possible in the years to come. For the companion volume Particle physics in the cosmos see 49.003.059. Contents: Introduction. I. Ideas. 1. Elementary particles and forces (C. Quigg). 2. Quarks with color and flavor (S. L. Glashow). 3. The lattice theory of quark confinement (C. Rebbi). Postscript to Ideas (C. Quigg). II. Tools. 4. The next generation of particle accelerators (R. R. Wilson). 5. The Superconducting Super Collider (J. D. Jackson, M. Tigner, S. Wojcicki). Postscript to Tools (R. A. Carrigan Jr.). III. Weak interactions. 6. Heavy leptons (M. L. Perl, W. T. Kirk). 7. The search for intermediate vector bosons (D. B. Cline, C. Rubbia, S. van der Meer). IV. Strong interactions. 8. The Upsilon particle (L. M. Lederman). 9. Quarkonium (E. D. Bloom, G. J. Feldman). 10. Particles with naked beauty (N. B. Mistry, R. A. Poling, E. H. Thorndike). V. Now and beyond. 11. Superstrings (M. B. Green). 12. The structure of quarks and leptons (H. Harari). Postscript to Now and beyond (R. A. Carrigan Jr., W. P. Trower).

  18. Masking as an effective quality control method for next-generation sequencing data analysis.

    PubMed

    Yun, Sajung; Yun, Sijung

    2014-12-13

    Next-generation sequencing produces base calls with low quality scores that can affect the accuracy of identifying simple nucleotide variation calls, including single nucleotide polymorphisms and small insertions and deletions. Here we compare the effectiveness of two data preprocessing methods, masking and trimming, and the accuracy of simple nucleotide variation calls on whole-genome sequence data from Caenorhabditis elegans. Masking substitutes low-quality base calls with 'N's (undetermined bases), whereas trimming removes low-quality bases, resulting in shorter read lengths. We demonstrate that masking is more effective than trimming in reducing the false-positive rate in single nucleotide polymorphism (SNP) calling. However, neither preprocessing method affected the false-negative rate in SNP calling with statistical significance compared to the data analysis without preprocessing. The false-positive and false-negative rates for small insertions and deletions did not differ between masking and trimming. We recommend masking over trimming as a more effective preprocessing method for next-generation sequencing data analysis, since masking reduces the false-positive rate in SNP calling without sacrificing the false-negative rate, although trimming is currently more commonly used in the field. The Perl script for masking is available at http://code.google.com/p/subn/. The sequencing data used in the study were deposited in the Sequence Read Archive (SRX450968 and SRX451773).
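
    A masking step of this kind is a few lines of Perl. The sketch below assumes Sanger/Illumina-1.8 (Phred+33) FASTQ on standard input and may differ from the published script at the URL above.

        #!/usr/bin/perl
        # Replace base calls whose Phred quality is below a threshold with 'N'.
        use strict;
        use warnings;

        my $min_q = 20;    # mask anything below Q20

        while ( my $head = <STDIN> ) {
            my $seq  = <STDIN>;
            my $plus = <STDIN>;
            my $qual = <STDIN>;
            last unless defined $qual;    # ignore a truncated final record
            chomp( $seq, $qual );

            my @bases = split //, $seq;
            my @quals = split //, $qual;
            for my $i ( 0 .. $#quals ) {
                # Phred+33: ASCII value minus 33 is the quality score.
                $bases[$i] = 'N' if ord( $quals[$i] ) - 33 < $min_q;
            }
            print $head, join( '', @bases ), "\n", $plus, $qual, "\n";
        }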

  19. PuReD-MCL: a graph-based PubMed document clustering methodology.

    PubMed

    Theodosiou, T; Darzentas, N; Angelis, L; Ouzounis, C A

    2008-09-01

    Biomedical literature is the principal repository of biomedical knowledge, with PubMed being the most complete database collecting, organizing and analyzing such textual knowledge. There are numerous efforts that attempt to exploit this information by using text mining and machine learning techniques. We developed a novel approach, called PuReD-MCL (PubMed Related Documents-MCL), which is based on the graph clustering algorithm MCL and relevant resources from PubMed. PuReD-MCL avoids using natural language processing (NLP) techniques directly; instead, it takes advantage of existing resources available from PubMed. PuReD-MCL then clusters documents efficiently using the MCL graph clustering algorithm, which is based on graph flow simulation. This process allows users to analyse the results by highlighting important clues, and finally to visualize the clusters and all relevant information using an interactive graph layout algorithm, for instance BioLayout Express3D. The methodology was applied to two different datasets, previously used for the validation of the document clustering tool TextQuest. The first dataset involves the organisms Escherichia coli and yeast, whereas the second is related to Drosophila development. PuReD-MCL successfully reproduces the annotated results obtained from TextQuest, while at the same time providing additional insights into the clusters and the corresponding documents. Source code in Perl and R is available from http://tartara.csd.auth.gr/~theodos/

  1. eHive: An Artificial Intelligence workflow system for genomic analysis

    PubMed Central

    2010-01-01

    Background The Ensembl project produces updates to its comparative genomics resources with each of its several releases per year. During each release cycle approximately two weeks are allocated to generate all the genomic alignments and the protein homology predictions. The number of calculations required for this task grows approximately quadratically with the number of species. We currently support 50 species in Ensembl and we expect the number to continue to grow in the future. Results We present eHive, a new fault tolerant distributed processing system initially designed to support comparative genomic analysis, based on blackboard systems, network distributed autonomous agents, dataflow graphs and block-branch diagrams. In the eHive system a MySQL database serves as the central blackboard and the autonomous agent, a Perl script, queries the system and runs jobs as required. The system allows us to define dataflow and branching rules to suit all our production pipelines. We describe the implementation of three pipelines: (1) pairwise whole genome alignments, (2) multiple whole genome alignments and (3) gene trees with protein homology inference. Finally, we show the efficiency of the system in real case scenarios. Conclusions eHive allows us to produce computationally demanding results in a reliable and efficient way with minimal supervision and high throughput. Further documentation is available at: http://www.ensembl.org/info/docs/eHive/. PMID:20459813

  2. Disaster behavioral health capacity: Findings from a multistate preparedness assessment.

    PubMed

    Peck, Megan; Mendenhall, Tai; Stenberg, Louise; Carlson, Nancy; Olson, Debra K

    2016-01-01

    To identify gaps in disaster behavioral health, the Preparedness and Emergency Response Learning Center (PERL) at the University of Minnesota's School of Public Health supported the development and implementation of a multistate disaster behavioral health preparedness assessment. Information was gathered regarding worker knowledge of current disaster behavioral health capacity at the state and local level, and perceived disaster behavioral health training needs and preferences. Between May and July 2015, 143 participants completed a 31-item uniform questionnaire administered over the telephone by trained interviewers, who were given uniform instructions on administering the questionnaire. Participants included county- and city-level public health leaders and directors from Minnesota, Wisconsin, and North Dakota. Findings demonstrate that across the three states there is a need for improved disaster behavioral health training and response plans for before, during, and after public health emergencies. This study identified perceived gaps in plans and procedures for meeting the disaster behavioral health needs of different at-risk populations, including children, youth, and those with mental illness. There was consistent agreement among participants about the lack of behavioral health coordination between agencies during emergency events. Findings can be used to inform policy and the development of trainings for those involved in disaster behavioral health. Effectively attending to interagency coordination and mutual aid agreements, planning for effective response and care for vulnerable populations, and targeted training will contribute to a more successful public health response to emergency events.

  3. Dual Target Design for CLAS12

    NASA Astrophysics Data System (ADS)

    Alam, Omair; Gilfoyle, Gerard; Christo, Steve

    2015-10-01

    An experiment to measure the neutron magnetic form factor (GnM) is planned for the new CLAS12 detector in Hall B at Jefferson Lab. This form factor will be extracted from the ratio of quasielastic electron-neutron to electron-proton scattering off a liquid deuterium (LD2) target. A collinear liquid hydrogen (LH2) target will be used to measure efficiencies at the same time as production data are collected from the LD2 target. To test target designs we have simulated CLAS12 and the target geometry. Electron-nucleon events are first produced with the QUasiElastic Event Generator (QUEEG), which models the internal motion of the nucleons in deuterium. The results are used as input to the CLAS12 Monte Carlo code gemc, a Geant4-based program that simulates particle interactions with each component of CLAS12, including the target material. The dual-target geometry has been added to gemc, including support structures and cryogenic transport systems. A Perl script was written to define the target materials and geometries; the output of the script is a set of database entries read by gemc at runtime. An initial study of the impact of this dual-target structure revealed limited effects on the electron momentum and angular resolutions. Work supported by the University of Richmond and the US Department of Energy.
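
    A geometry-definition script of this sort typically emits one delimited row per volume for the simulation to read at runtime. The field layout and values below are illustrative placeholders, not the CLAS12 geometry database schema.

        #!/usr/bin/perl
        # Emit one delimited row per target volume for the simulation to
        # read at runtime.  Shapes, dimensions and materials are made up.
        use strict;
        use warnings;

        # name, mother volume, shape, dimensions (mm), material
        my @volumes = (
            [ 'LD2_cell',  'target', 'Tube', '0 5 25',   'LD2'   ],
            [ 'LH2_cell',  'target', 'Tube', '0 5 15',   'LH2'   ],
            [ 'cell_wall', 'target', 'Tube', '5 5.1 25', 'G4_Al' ],
        );

        for my $v (@volumes) {
            print join( ' | ', @$v ), "\n";    # pipe-delimited database row
        }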

  4. PanWeb: A web interface for pan-genomic analysis.

    PubMed

    Pantoja, Yan; Pinheiro, Kenny; Veras, Allan; Araújo, Fabrício; Lopes de Sousa, Ailton; Guimarães, Luis Carlos; Silva, Artur; Ramos, Rommel T J

    2017-01-01

    With increased production of genomic data since the advent of next-generation sequencing (NGS), there has been a need to develop new bioinformatics tools and areas, such as comparative genomics. In comparative genomics, the genetic material of an organism is directly compared to that of another organism to better understand biological species. Moreover, the exponentially growing number of deposited prokaryote genomes has enabled the investigation of several genomic characteristics that are intrinsic to certain species. Thus, a new approach to comparative genomics, termed pan-genomics, was developed. In pan-genomics, various organisms of the same species or genus are compared. Currently, there are many tools that can perform pan-genomic analyses, such as PGAP (Pan-Genome Analysis Pipeline), Panseq (Pan-Genome Sequence Analysis Program) and PGAT (Prokaryotic Genome Analysis Tool). Among these software tools, PGAP was developed in the Perl scripting language, and its reliance on UNIX terminals and its requirement for an extensively parameterized command line can be a problem for users without prior computational experience. Thus, the aim of this study was to develop a web application, known as PanWeb, that serves as a graphical interface for PGAP. In addition, using the output files of the PGAP pipeline, the application generates graphics using custom-developed scripts in the R programming language. PanWeb is freely available at http://www.computationalbiology.ufpa.br/panweb.

  5. PubMed Central

    DOBRETSOV, K.; STOLYAR, S.

    2015-01-01

    Herein we examined the toxicity, penetration properties and antibiotic-binding ability of Fe2O3·nH2O magnetic nanoparticles extracted from silt of the Borovoye Lake (Krasnoyarsk, Russia). Experimental studies were carried out using magnetic nanoparticles alone and after antibiotic exposure in tissue samples from nasal mucosa, cartilage and bone (in vitro). Toxicity of the particles was studied in laboratory animals (in vivo). Tissues removed at endonasal surgery (nasal mucosa, cartilage and bone of the nasal septum) were placed in a solution containing nanoparticles and exposed to a magnetic field. Distribution of nanoparticles was determined by Perls' reaction. After intravenous injection, possible toxic effects of injected nanoparticles on the organs and tissues of rats were evaluated by histological examination. Binding between the nanoparticles and the antibiotic (amoxicillin clavulanate) was studied using infrared spectroscopy. In 30 in vitro experiments, magnetisation of Fe2O3·nH2O nanoparticles resulted in their diffuse infiltration into the mucosa, cartilage and bone tissue of the nose and paranasal sinuses. Intravenous injection of 0.2 ml of magnetic nanoparticles into the rat's tail vein did not result in any changes in parenchymatous organs, and the nanoparticles were completely eliminated from the body within 24 hours. The interaction of nanoparticles with amoxicillin clavulanate was demonstrated by infrared spectroscopy. These positive experimental results provide a basis for further clinical investigation of these magnetic nanoparticles and their use in otorhinolaryngology. PMID:26019393

  6. HFE gene variants and iron-induced oxygen radical generation in idiopathic pulmonary fibrosis.

    PubMed

    Sangiuolo, Federica; Puxeddu, Ermanno; Pezzuto, Gabriella; Cavalli, Francesco; Longo, Giuliana; Comandini, Alessia; Di Pierro, Donato; Pallante, Marco; Sergiacomi, Gianluigi; Simonetti, Giovanni; Zompatori, Maurizio; Orlandi, Augusto; Magrini, Andrea; Amicosante, Massimo; Mariani, Francesca; Losi, Monica; Fraboni, Daniela; Bisetti, Alberto; Saltini, Cesare

    2015-02-01

    In idiopathic pulmonary fibrosis (IPF), lung accumulation of excessive extracellular iron and macrophage haemosiderin may suggest disordered iron homeostasis leading to recurring microscopic injury and fibrosing damage. The current study population comprised 89 consecutive IPF patients and 107 controls. 54 patients and 11 controls underwent bronchoalveolar lavage (BAL). Haemosiderin was assessed by Perls' stain, BAL fluid malondialdehyde (MDA) by high-performance liquid chromatography, BAL cell iron-dependent oxygen radical generation by fluorimetry and the frequency of hereditary haemochromatosis HFE gene variants by reverse dot blot hybridisation. Macrophage haemosiderin, BAL fluid MDA and BAL cell unstimulated iron-dependent oxygen radical generation were all significantly increased above controls (p<0.05). The frequency of C282Y, S65C and H63D HFE allelic variants was markedly higher in IPF compared with controls (40.4% versus 22.4%, OR 2.35, p=0.008) and was associated with higher iron-dependent oxygen radical generation (HFE variant 107.4±56.0, HFE wild type (wt) 59.4±36.4 and controls 16.7±11.8 fluorescence units per 10^5 BAL cells; p=0.028 HFE variant versus HFE wt, p=0.006 HFE wt versus controls). The data suggest iron dysregulation associated with HFE allelic variants may play an important role in increasing susceptibility to environmental exposures, leading to recurring injury and fibrosis in IPF. Copyright ©ERS 2015.

  7. IPython: components for interactive and parallel computing across disciplines. (Invited)

    NASA Astrophysics Data System (ADS)

    Perez, F.; Bussonnier, M.; Frederic, J. D.; Froehle, B. M.; Granger, B. E.; Ivanov, P.; Kluyver, T.; Patterson, E.; Ragan-Kelley, B.; Sailer, Z.

    2013-12-01

    Scientific computing is an inherently exploratory activity that requires constantly cycling between code, data and results, each time adjusting the computations as new insights and questions arise. To support such a workflow, good interactive environments are critical. The IPython project (http://ipython.org) provides a rich architecture for interactive computing with: 1. Terminal-based and graphical interactive consoles. 2. A web-based Notebook system with support for code, text, mathematical expressions, inline plots and other rich media. 3. Easy to use, high performance tools for parallel computing. Despite its roots in Python, the IPython architecture is designed in a language-agnostic way to facilitate interactive computing in any language. This allows users to mix Python with Julia, R, Octave, Ruby, Perl, Bash and more, as well as to develop native clients in other languages that reuse the IPython clients. In this talk, I will show how IPython supports all stages in the lifecycle of a scientific idea: 1. Individual exploration. 2. Collaborative development. 3. Production runs with parallel resources. 4. Publication. 5. Education. In particular, the IPython Notebook provides an environment for "literate computing" with a tight integration of narrative and computation (including parallel computing). These Notebooks are stored in a JSON-based document format that provides an "executable paper": notebooks can be version controlled, exported to HTML or PDF for publication, and used for teaching.
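
    Keeping to this document's Perl thread, the sketch below reads that JSON document format from Perl; it assumes the present-day nbformat layout with a top-level "cells" array (the v3 format current at the time of this talk nested cells inside worksheets) and a local file named example.ipynb.

      #!/usr/bin/perl
      use strict; use warnings;
      use JSON::PP;  # core module

      # Minimal sketch: print the source of every code cell in a notebook.
      # Assumes nbformat >= 4, where each cell carries "cell_type" and
      # "source" (typically a list of lines).
      local $/;  # slurp mode
      open my $fh, '<', 'example.ipynb' or die "example.ipynb: $!";
      my $nb = decode_json(<$fh>);

      for my $cell (@{ $nb->{cells} }) {
          next unless $cell->{cell_type} eq 'code';
          print join('', @{ $cell->{source} }), "\n---\n";
      }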

  8. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases

    PubMed Central

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-01-01

    As global cloud frameworks for bioinformatics research databases grow huge and heterogeneous, solutions face diametric challenges spanning cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases holding 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for analysis of the raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface for secure access to each fragment of linked and raw life sciences data under the control of programming languages popular among bioinformaticians, such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
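
    The access pattern amounts to an HTTP GET returning JSON that maps directly onto native Perl data structures; in the minimal sketch below, the endpoint path and query parameters are placeholders, not the documented Semantic-JSON API.

      #!/usr/bin/perl
      use strict; use warnings;
      use LWP::UserAgent;
      use JSON::PP;

      # Placeholder URL: the real Semantic-JSON method names and parameters
      # are documented at semanticjson.org and are not reproduced here.
      my $url = 'http://semanticjson.org/example/endpoint?format=json';

      my $ua  = LWP::UserAgent->new(timeout => 30);
      my $res = $ua->get($url);
      die "request failed: ", $res->status_line unless $res->is_success;

      my $data = decode_json($res->decoded_content);
      # The JSON reply is now an ordinary Perl structure of hashes/arrays.
      print "top-level keys: ", join(', ', sort keys %$data), "\n";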

  9. AgdbNet – antigen sequence database software for bacterial typing

    PubMed Central

    Jolley, Keith A; Maiden, Martin CJ

    2006-01-01

    Background Bacterial typing schemes based on the sequences of genes encoding surface antigens require databases that provide a uniform, curated, and widely accepted nomenclature of the variants identified. Due to the differences in typing schemes, imposed by the diversity of genes targeted, creating these databases has typically required the writing of one-off code to link the database to a web interface. Here we describe agdbNet, widely applicable web database software that facilitates simultaneous BLAST querying of multiple loci using either nucleotide or peptide sequences. Results Databases are described by XML files that are parsed by a Perl CGI script. Each database can have any number of loci, which may be defined by nucleotide and/or peptide sequences. The software is currently in use on at least five public databases for the typing of Neisseria meningitidis, Campylobacter jejuni and Streptococcus equi and can be set up to query internal isolate tables or suitably-configured external isolate databases, such as those used for multilocus sequence typing. The style of the resulting website can be fully configured by modifying stylesheets and through the use of customised header and footer files that surround the output of the script. Conclusion The software provides a rapid means of setting up customised Internet antigen sequence databases. The flexible configuration options enable typing schemes with differing requirements to be accommodated. PMID:16790057
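
    The XML-driven design keeps the Perl CGI script generic across typing schemes; the toy sketch below parses a database description in the same spirit, but the element and attribute names are invented, not agdbNet's actual schema.

      #!/usr/bin/perl
      use strict; use warnings;
      use XML::Simple qw(XMLin);

      # Invented schema for illustration only; agdbNet's real XML elements
      # are described in the paper and software documentation.
      my $xml = <<'END';
      <database name="demo_antigens">
        <locus id="porA" type="nucleotide" fasta="porA.fas"/>
        <locus id="fetA" type="peptide"    fasta="fetA.fas"/>
      </database>
      END

      my $db = XMLin($xml, ForceArray => ['locus'], KeyAttr => []);
      print "database: $db->{name}\n";
      for my $locus (@{ $db->{locus} }) {
          print "  locus $locus->{id} ($locus->{type}) -> $locus->{fasta}\n";
      }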

  10. The application of moving bed biofilm reactor to denitrification process after trickling filters.

    PubMed

    Kopec, Lukasz; Drewnowski, Jakub; Kopec, Adam

    2016-12-01

    The paper presents research on a prototype moving bed biofilm reactor (MBBR). The device was used for the post-denitrification process and was installed at the end of a technological system consisting of a septic tank and two trickling filters. The concentrations of suspended biomass and of biomass attached to the EvU Perl moving bed surface were determined. The impact of the external organic carbon concentration on the denitrification rate and the efficiency of total nitrogen removal was also examined. The study showed that the greater part of the biomass was in suspended form and only 6% of the total biomass was attached to the surface of the moving bed. Abrasion forces between carriers of the moving bed caused fast stripping of attached microorganisms and formation of flocs. Thanks to the immobilization of a small amount of biomass, the MBBR was less prone to biomass washout and to the occurrence of scum and sludge swelling. The maximum denitrification rate averaged 0.73 g N-NO3/g DM·d (DM: dry matter) and was achieved when the reactor was maintained at an external organic carbon concentration exceeding 300 mg O2/dm3 of chemical oxygen demand. The reactor proved to be an effective device, increasing total nitrogen removal from 53.5% to 86.0%.

  11. Autonomic Intelligent Cyber Sensor to Support Industrial Control Network Awareness

    DOE PAGES

    Vollmer, Todd; Manic, Milos; Linda, Ondrej

    2013-06-01

    The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of Autonomic computing and a SOAP-based IF-MAP external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, self-managed framework. The contribution of this paper is two-fold: 1) a flexible two-level communication layer based on Autonomic computing and Service Oriented Architecture is detailed, and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and Perl and utilize a common internal D-Bus communication mechanism. A proof-of-concept prototype was deployed on a mixed-use test network showing possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized and 10 of the 12 emulated devices were created with specific operating system and port configurations. Additionally, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.

  12. ConsPred: a rule-based (re-)annotation framework for prokaryotic genomes.

    PubMed

    Weinmaier, Thomas; Platzer, Alexander; Frank, Jeroen; Hellinger, Hans-Jörg; Tischler, Patrick; Rattei, Thomas

    2016-11-01

    The rapidly growing number of available prokaryotic genome sequences requires fully automated and high-quality software solutions for their initial annotation and re-annotation. Here we present ConsPred, a prokaryotic genome annotation framework that performs intrinsic gene predictions, homology searches and predictions of non-coding genes as well as CRISPR repeats, and integrates all evidence into a consensus annotation. ConsPred achieves comprehensive, high-quality annotations based on rules and priorities, similar to decision-making in manual curation, and avoids conflicting predictions. Parameters controlling the annotation process are configurable by the user. ConsPred has been used in the institutions of the authors for more than 5 years and can easily be extended and adapted to specific needs. The ConsPred algorithm for producing a consensus from the varying scores of multiple gene prediction programs approaches manual curation in accuracy. Its rule-based approach for choosing final predictions avoids overriding previous manual curations. ConsPred is implemented in Java, Perl and Shell and is freely available under the Creative Commons license as a stand-alone in-house pipeline or as an Amazon Machine Image for cloud computing; see https://sourceforge.net/projects/conspred/. Contact: thomas.rattei@univie.ac.at. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
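
    The consensus step can be pictured as keeping, for each locus, the prediction from the most trusted evidence source; the Perl sketch below is schematic only, with invented source names and priorities, not ConsPred's actual rule set.

      #!/usr/bin/perl
      use strict; use warnings;

      # Schematic: pick, per locus, the annotation from the highest-priority
      # evidence source (lower number = more trusted). Names are invented.
      my %priority = (manual_curation => 0, homology => 1, ab_initio => 2);

      my @predictions = (
          { locus => 'g001', source => 'ab_initio',       product => 'hypothetical protein' },
          { locus => 'g001', source => 'homology',        product => 'DNA gyrase subunit A' },
          { locus => 'g002', source => 'manual_curation', product => '16S rRNA' },
      );

      my %best;
      for my $p (@predictions) {
          my $cur = $best{ $p->{locus} };
          $best{ $p->{locus} } = $p
              if !$cur || $priority{ $p->{source} } < $priority{ $cur->{source} };
      }
      print "$_: $best{$_}{product} [$best{$_}{source}]\n" for sort keys %best;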

  13. An Open-source Meteorological Operational System and its Installation in Portuguese-speaking Countries

    NASA Astrophysics Data System (ADS)

    Almeida, W. G.; Ferreira, A. L.; Mendes, M. V.; Ribeiro, A.; Yoksas, T.

    2007-05-01

    CPTEC, a division of Brazil’s INPE, has been using several open-source software packages for a variety of tasks in its Data Division. Among these tools are ones traditionally used in the research and educational communities, such as GrADS (Grid Analysis and Display System, from the Center for Ocean-Land-Atmosphere Studies (COLA)), the Local Data Manager (LDM) and GEMPAK (from Unidata), and operational tools such as the Automatic File Distributor (AFD) that are popular among National Meteorological Services. In addition, some tools developed locally at CPTEC are also being made available as open-source packages. One package is being used to manage the data from the Automatic Weather Stations that INPE operates. This system uses only open-source tools, such as the MySQL database, Perl scripts and Java programs for web access, and Unidata's Internet Data Distribution (IDD) system and AFD for data delivery. All of these packages are bundled into a low-cost, easy-to-install package called the Meteorological Data Operational System. Recently, in cooperation with the SICLIMAD project, this system has been modified for use by Portuguese-speaking countries in Africa to manage data from the many Automatic Weather Stations being installed in those countries under SICLIMAD sponsorship. In this presentation we describe the tools included in, and the architecture of, the Meteorological Data Operational System.
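
    A minimal sketch of the Perl/MySQL piece of such a station-data system is shown below; the table, columns and credentials are hypothetical, and only the DBI calls themselves are standard.

      #!/usr/bin/perl
      use strict; use warnings;
      use DBI;

      # Hypothetical table and credentials; only the DBI API is standard.
      my $dbh = DBI->connect('DBI:mysql:database=aws;host=localhost',
                             'user', 'password', { RaiseError => 1 });

      my $sth = $dbh->prepare(
          'INSERT INTO observations (station_id, obs_time, temp_c, rh_pct)
           VALUES (?, ?, ?, ?)');
      $sth->execute('SBBR01', '2007-05-01 12:00:00', 28.4, 63);

      $dbh->disconnect;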

  14. VizieR Online Data Catalog: FAMA code for stellar parameters and abundances (Magrini+, 2013)

    NASA Astrophysics Data System (ADS)

    Magrini, L.; Randich, S.; Friel, E.; Spina, L.; Jacobson, H.; Cantat-Gaudin, T.; Donati, P.; Baglioni, R.; Maiorca, E.; Bragaglia, A.; Sordo, R.; Vallenari, A.

    2013-07-01

    FAMA v.1, July 2013, distributed with MOOG v2013 and Kurucz models. Package contents:
    - Perl codes: read_out2.pl, read_final.pl, driver.pl, sclipping_26.0.pl, sclipping_final.pl, sclipping_26.1.pl, confronta.pl, fama.pl
    - MODEL_ATMO: model atmospheres and interpolator (Kurucz models)
    - MOOG_files: files to compile MOOG (the most recent version of MOOG can be obtained from http://www.as.utexas.edu/~chris/moog.html)
    - FAMAmoogfiles: files to update when compiling MOOG
    - OUTPUT: directory in which the results will be stored; contains an sm macro to produce the final plots
    - automoog.par: parameter file for FAMA with entries 1) OUTPUTdir, 2) MOOGdir, 3) modelsdir, 4) 1.0 (default), the fraction of the dispersion of FeI abundances used to compute the errors on the stellar parameters (1.0 means 100%, so to compute e.g. the error on Teff the code finds the Teff corresponding to a slope of σ(FeI)/range(EP)), 5) 1.2 (default), σ clipping for FeI lines, 6) 1.0 (default), σ clipping for FeII lines, 7) 1.0 (default), σ clipping for the other elements, 8) 1.0 (default), value of the QP parameter; higher values mean weaker convergence criteria
    - star.iron: EWs in the correct format to test the code
    - sun.par: initial parameters for the test (1 data file)

  15. Simplifying the Analysis of Data from Multiple Heliophysics Instruments and Missions

    NASA Astrophysics Data System (ADS)

    Bazell, D.; Vandegriff, J. D.

    2014-12-01

    Understanding the intertwined plasma, particles and fields connecting the Sun and the Earth requires combining data from many diverse sources, but there are still many technological barriers that complicate the merging of data from different instruments and missions. We present an emerging data serving capability that provides a uniform way to access heterogeneous and distributed data. The goal of our data server is to provide a standardized data access mechanism that is identical for data of any format and layout (CDF, custom binary, FITS, netCDF, CSV and other flavors of ASCII, etc.). Data remain in their original format and location (i.e., at instrument team sites or existing data centers), and our data server delivers a dynamically reformatted view of the data. Scientists can then use tools (clients that talk to the server) that offer a single interface for browsing, analyzing or downloading many different contemporary and legacy heliophysics data sets. Our current server accesses many CDF data resources at CDAWeb, as well as multiple other instrument team sites. Our web service will be deployed on the Amazon Cloud at http://datashop.elasticbeanstalk.com/. Two basic clients will also be demonstrated: one in Java and one in IDL. Python, Perl, and Matlab clients are also planned. Complex missions such as Solar Orbiter and Solar Probe Plus will benefit greatly from tools that enable multi-instrument and multi-mission data comparison.
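
    A Perl client of the kind planned could reduce to a single HTTP request for a reformatted view of a dataset; in this sketch the query parameters are invented, since the abstract does not specify the server's request syntax.

      #!/usr/bin/perl
      use strict; use warnings;
      use LWP::UserAgent;

      # Invented query parameters for illustration; see the deployed service
      # at http://datashop.elasticbeanstalk.com/ for the real interface.
      my $url = 'http://datashop.elasticbeanstalk.com/data'
              . '?id=EXAMPLE_DATASET&start=2014-01-01&stop=2014-01-02&format=csv';

      my $res = LWP::UserAgent->new->get($url);
      die "request failed: ", $res->status_line unless $res->is_success;

      # Every dataset arrives in the same flat CSV layout regardless of its
      # native format (CDF, FITS, netCDF, ...), so parsing stays trivial.
      for my $line (split /\n/, $res->decoded_content) {
          my @fields = split /,/, $line;
          print "time=$fields[0]\n";
      }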

  16. Light trapping and electrical transport in thin-film solar cells with randomly rough textures

    NASA Astrophysics Data System (ADS)

    Kowalczewski, Piotr; Bozzola, Angelo; Liscidini, Marco; Claudio Andreani, Lucio

    2014-05-01

    Using rigorous electro-optical calculations, we predict a significant efficiency enhancement in thin-film crystalline silicon (c-Si) solar cells with rough interfaces. We show that an optimized rough texture allows one to reach the Lambertian limit of absorption in a wide absorber thickness range from 1 to 100 μm. The improvement of efficiency due to the roughness is particularly substantial for thin cells, for which light trapping is crucial. We consider Auger, Shockley-Read-Hall (SRH), and surface recombination, quantifying the importance of specific loss mechanisms. When the cell performance is limited by intrinsic Auger recombination, the efficiency of 24.4% corresponding to the wafer-based PERL cell can be achieved even if the absorber thickness is reduced from 260 to 10 μm. For cells with material imperfections, defect-based SRH recombination contributes to the opposite trends of short-circuit current and open-circuit voltage as a function of the absorber thickness. By investigating a wide range of SRH parameters, we determine an optimal absorber thickness as a function of material quality. Finally, we show that the efficiency enhancement in textured cells persists also in the presence of surface recombination. Indeed, in our design the efficiency is limited by recombination at the rear (silicon absorber/back reflector) interface, and therefore it is possible to engineer the front surface to a large extent without compromising on efficiency.

  17. MPRAnator: a web-based tool for the design of massively parallel reporter assay experiments

    PubMed Central

    Georgakopoulos-Soares, Ilias; Jain, Naman; Gray, Jesse M; Hemberg, Martin

    2017-01-01

    Motivation: With the rapid advances in DNA synthesis and sequencing technologies and the continuing decline in the associated costs, high-throughput experiments can be performed to investigate the regulatory role of thousands of oligonucleotide sequences simultaneously. Nevertheless, designing high-throughput reporter assay experiments such as massively parallel reporter assays (MPRAs) and similar methods remains challenging. Results: We introduce MPRAnator, a set of tools that facilitate rapid design of MPRA experiments. With MPRA Motif design, a set of variables provides fine control of how motifs are placed into sequences, thereby allowing the investigation of the rules that govern transcription factor (TF) occupancy. MPRA single-nucleotide polymorphism design can be used to systematically examine the functional effects of single or combinations of single-nucleotide polymorphisms at regulatory sequences. Finally, the Transmutation tool allows for the design of negative controls by permitting scrambling, reversing, complementing or introducing multiple random mutations in the input sequences or motifs. Availability and implementation: MPRAnator tool set is implemented in Python, Perl and Javascript and is freely available at www.genomegeek.com and www.sanger.ac.uk/science/tools/mpranator. The source code is available on www.github.com/hemberg-lab/MPRAnator/ under the MIT license. The REST API allows programmatic access to MPRAnator using simple URLs. Contact: igs@sanger.ac.uk or mh26@sanger.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27605100
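
    The scrambling control mentioned for the Transmutation tool is simple to picture: randomize the order of the bases while preserving composition. A tiny Perl illustration (not the MPRAnator source, which is Python/Perl/Javascript) follows.

      #!/usr/bin/perl
      use strict; use warnings;
      use List::Util qw(shuffle);

      # Sketch of a scrambled negative control: same base composition,
      # randomized order. Not taken from the MPRAnator source.
      my $seq       = 'ACGTACGTGGGCCCAATT';
      my $scrambled = join '', shuffle split //, $seq;
      print "$seq -> $scrambled\n";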

  18. LightWAVE: Waveform and Annotation Viewing and Editing in a Web Browser.

    PubMed

    Moody, George B

    2013-09-01

    This paper describes LightWAVE, recently-developed open-source software for viewing ECGs and other physiologic waveforms and associated annotations (event markers). It supports efficient interactive creation and modification of annotations, capabilities that are essential for building new collections of physiologic signals and time series for research. LightWAVE is constructed of components that interact in simple ways, making it straightforward to enhance or replace any of them. The back end (server) is a common gateway interface (CGI) application written in C for speed and efficiency. It retrieves data from its data repository (PhysioNet's open-access PhysioBank archives by default, or any set of files or web pages structured as in PhysioBank) and delivers them in response to requests generated by the front end. The front end (client) is a web application written in JavaScript. It runs within any modern web browser and does not require installation on the user's computer, tablet, or phone. Finally, LightWAVE's scribe is a tiny CGI application written in Perl, which records the user's edits in annotation files. LightWAVE's data repository, back end, and front end can be located on the same computer or on separate computers. The data repository may be split across multiple computers. For compatibility with the standard browser security model, the front end and the scribe must be loaded from the same domain.
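
    The scribe's job, accepting an edit from the browser and appending it to an annotation file, fits in a few lines of CGI Perl; the parameter names and record format below are illustrative, not LightWAVE's actual protocol (the real scribe writes standard annotation files).

      #!/usr/bin/perl
      use strict; use warnings;
      use CGI;

      # Illustrative only: LightWAVE's real scribe uses its own parameter
      # names and file formats, not this tab-separated sketch.
      my $q = CGI->new;
      my ($record, $time, $label) = map { scalar $q->param($_) }
                                    qw(record time label);

      open my $fh, '>>', "$record.edits" or die "cannot append: $!";
      print {$fh} join("\t", time(), $time, $label), "\n";
      close $fh;

      print $q->header('text/plain'), "ok\n";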

  19. MPRAnator: a web-based tool for the design of massively parallel reporter assay experiments.

    PubMed

    Georgakopoulos-Soares, Ilias; Jain, Naman; Gray, Jesse M; Hemberg, Martin

    2017-01-01

    With the rapid advances in DNA synthesis and sequencing technologies and the continuing decline in the associated costs, high-throughput experiments can be performed to investigate the regulatory role of thousands of oligonucleotide sequences simultaneously. Nevertheless, designing high-throughput reporter assay experiments such as massively parallel reporter assays (MPRAs) and similar methods remains challenging. We introduce MPRAnator, a set of tools that facilitate rapid design of MPRA experiments. With MPRA Motif design, a set of variables provides fine control of how motifs are placed into sequences, thereby allowing the investigation of the rules that govern transcription factor (TF) occupancy. MPRA single-nucleotide polymorphism design can be used to systematically examine the functional effects of single or combinations of single-nucleotide polymorphisms at regulatory sequences. Finally, the Transmutation tool allows for the design of negative controls by permitting scrambling, reversing, complementing or introducing multiple random mutations in the input sequences or motifs. MPRAnator tool set is implemented in Python, Perl and Javascript and is freely available at www.genomegeek.com and www.sanger.ac.uk/science/tools/mpranator. The source code is available on www.github.com/hemberg-lab/MPRAnator/ under the MIT license. The REST API allows programmatic access to MPRAnator using simple URLs. Contact: igs@sanger.ac.uk or mh26@sanger.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  20. footprintDB: a database of transcription factors with annotated cis elements and binding interfaces.

    PubMed

    Sebastian, Alvaro; Contreras-Moreira, Bruno

    2014-01-15

    Traditional and high-throughput techniques for determining transcription factor (TF) binding specificities are generating large volumes of data of uneven quality, which are scattered across individual databases. FootprintDB integrates some of the most comprehensive freely available libraries of curated DNA binding sites and systematically annotates the binding interfaces of the corresponding TFs. The first release contains 2422 unique TF sequences, 10,112 DNA binding sites and 3662 DNA motifs. A survey of the included data sources, organisms and TF families was performed together with the proprietary database TRANSFAC, finding that footprintDB has a similar coverage of multicellular organisms, while also containing bacterial regulatory data. A search engine has been designed that drives the prediction of DNA motifs for input TFs, or conversely of TF sequences that might recognize input regulatory sequences, by comparison with database entries. Such predictions can also be extended to a single proteome chosen by the user, and results are ranked in terms of interface similarity. Benchmark experiments with bacterial, plant and human data were performed to measure the predictive power of footprintDB searches, which were able to correctly recover 10, 55 and 90% of the tested sequences, respectively. Correctly predicted TFs had a higher interface similarity than the average, confirming its diagnostic value. Web site implemented in PHP, Perl, MySQL and Apache. Freely available from http://floresta.eead.csic.es/footprintdb.

  1. Development of cleaved amplified polymorphic sequence markers and a CAPS-based genetic linkage map in watermelon (Citrullus lanatus [Thunb.] Matsum. and Nakai) constructed using whole-genome re-sequencing data

    PubMed Central

    Liu, Shi; Gao, Peng; Zhu, Qianglong; Luan, Feishi; Davis, Angela R.; Wang, Xiaolu

    2016-01-01

    Cleaved amplified polymorphic sequence (CAPS) markers are useful tools for detecting single nucleotide polymorphisms (SNPs). This study detected SNP sites and converted them into CAPS markers based on high-throughput re-sequencing data in watermelon, for linkage map construction and quantitative trait locus (QTL) analysis. Two inbred lines, Cream of Saskatchewan (COS) and LSW-177, were re-sequenced and analyzed with a self-compiled Perl script for CAPS marker development. Of the assembled sequences of the two parental materials, 88.7% and 78.5%, respectively, mapped to the reference watermelon genome. Comparative analysis of the assembled genomes yielded 225,693 SNPs and 19,268 indels between the two materials. In total, 532 pairs of CAPS markers were designed with 16 restriction enzymes; of these, 271 primer pairs gave distinct, polymorphic bands of the expected length after PCR and enzyme digestion, a polymorphism rate of 50.94%. Using the new CAPS markers, an initial CAPS-based genetic linkage map was constructed with the F2 population, spanning 1836.51 cM across 11 linkage groups with 301 markers. Twelve QTLs were detected related to fruit flesh color, length, width, shape index, and Brix content. These new CAPS markers will be a valuable resource for breeding programs and genetic studies of watermelon. PMID:27162496
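
    The CAPS principle is that a SNP is scorable on a gel only if it creates or destroys a restriction site; a compact Perl check of that condition, with an arbitrary enzyme and invented allele sequences, looks like:

      #!/usr/bin/perl
      use strict; use warnings;

      # Arbitrary example: does a SNP gain or lose an EcoRI site (GAATTC)?
      my $site    = 'GAATTC';
      my $allele1 = 'TTGAATTCAA';  # one parental allele (invented)
      my $allele2 = 'TTGAGTTCAA';  # the other allele, with an A->G SNP

      my $cuts1 = () = $allele1 =~ /$site/g;   # count-of-matches idiom
      my $cuts2 = () = $allele2 =~ /$site/g;
      print $cuts1 != $cuts2
          ? "polymorphic: candidate CAPS marker\n"
          : "monomorphic for $site\n";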

  2. Anthrax biosensor, protective antigen ion channel asymmetric blockade.

    PubMed

    Halverson, Kelly M; Panchal, Rekha G; Nguyen, Tam L; Gussio, Rick; Little, Stephen F; Misakian, Martin; Bavari, Sina; Kasianowicz, John J

    2005-10-07

    The significant threat posed by biological agents (e.g. anthrax, tetanus, botulinum, and diphtheria toxins) (Inglesby, T. V., O'Toole, T., Henderson, D. A., Bartlett, J. G., Ascher, M. S., Eitzen, E., Friedlander, A. M., Gerberding, J., Hauer, J., Hughes, J., McDade, J., Osterholm, M. T., Parker, G., Perl, T. M., Russell, P. K., and Tonat, K. (2002) J. Am. Med. Assoc. 287, 2236-2252) requires innovative technologies and approaches to understand the mechanisms of toxin action and to develop better therapies. Anthrax toxins are formed from three proteins secreted by fully virulent Bacillus anthracis, protective antigen (PA, 83 kDa), lethal factor (LF, 90 kDa), and edema factor (EF, 89 kDa). Here we present electrophysiological measurements demonstrating that full-length LF and EF convert the current-voltage relationship of the heptameric PA63 ion channel from slightly nonlinear to highly rectifying and diode-like at pH 6.6. This effect provides a novel method for characterizing functional toxin interactions. The method confirms that a previously well characterized PA63 monoclonal antibody, which neutralizes anthrax lethal toxin in animals in vivo and in vitro, prevents the binding of LF to the PA63 pore. The technique can also detect the presence of anthrax lethal toxin complex from plasma of infected animals. The latter two results suggest the potential application of PA63 nanopore-based biosensors in anthrax therapeutics and diagnostics.

  3. Melanins as biomarkers of ovarian follicular atresia in the catfish Heteropneustes fossilis: biochemical and histochemical characterization, seasonal variation and hormone effects.

    PubMed

    Kumar, Ravi; Joy, Keerikkattil P

    2015-06-01

    Follicular atresia is a common feature of the vertebrate ovary that occurs at different stages of folliculogenesis and ovarian regression. It has physiological significance to maintain homeostasis and control fecundity, and ensure removal of post-ovulatory follicular remnants for preparing the ovary for the next cycle. Pigments appear late in the atretic process as indigestible waste formed out of the degradation of the oocytes, follicle wall and granulocytes. In the present study, pigment accumulation was demonstrated by Schmorl's and Perls' staining methods in the atretic ovarian follicles of Heteropneustes fossilis during follicular development and regression. Melanins were characterized spectrophotometrically for the first time in fish ovary. The predominant form is eumelanin, followed by pheomelanin and alkali-soluble melanin. Melanins showed significant seasonal variations with levels low in gonad resting phase, increasing to the peak in the post-spawning phase. The concentration of melanins increased time-dependently in post-ovulated ovary after human chorionic gonadotropin treatment. In the spawning phase, in vitro incubation of ovary slices with estradiol-17β or dexamethasone for 8 or 16 h decreased both eumelanin and pheomelanin levels time-dependently. The alkali-soluble melanin showed a significant decrease only in the dexamethasone group at 16 h. The results show that melanin assay can be used as a biomarker of follicular atresia in fish ovary, natural or induced by environmental toxicants.

  4. [Effects of nano-lead exposure on learning and memory as well as iron homeostasis in brain of offspring rats].

    PubMed

    Gao, Jing; Su, Hong; Yin, Jingwen; Cao, Fuyuan; Feng, Peipei; Liu, Nan; Xue, Ling; Zheng, Guoying; Li, Qingzhao; Zhang, Yanshu

    2015-06-01

    To investigate the effects of nano-lead exposure on learning and memory and on iron homeostasis in the brain of offspring rats on postnatal day 21 (PND21) and postnatal day 42 (PND42). Twenty adult pregnant female Sprague-Dawley rats were randomly divided into a control group and a nano-lead group. Rats in the nano-lead group were orally administered 10 mg/kg nano-lead, while rats in the control group were administered an equal volume of normal saline until PND21. On PND21, the offspring rats were weaned and given the same treatment as the pregnant rats until 42 days after birth. The learning and memory ability of offspring rats on PND21 and PND42 was evaluated by the Morris water maze test. Hippocampus and cortex samples of offspring rats on PND21 and PND42 were collected to determine iron and lead levels by inductively coupled plasma-mass spectrometry. The distribution of iron in the hippocampus and cortex was observed by Perls' iron staining. The expression levels of ferritin, ferroportin 1 (FPN1), hephaestin (HP), and ceruloplasmin (CP) were measured by enzyme-linked immunosorbent assay. After nano-lead exposure, the iron content in the cortex of offspring rats on PND21 and PND42 in the nano-lead group was significantly higher than that in the control group (32.63 ± 6.03 µg/g vs 27.04 ± 5.82 µg/g, P<0.05; 46.20 ± 10.60 µg/g vs 36.61 ± 10.2 µg/g, P<0.05). The iron content in the hippocampus of offspring rats on PND42 in the nano-lead group was significantly higher than that in the control group (56.9 ± 4.37 µg/g vs 37.71 ± 6.92 µg/g, P<0.05). Perls' staining showed massive iron deposition in the cortex and hippocampus in the nano-lead group. The FPN1 level in the cortex of offspring rats on PND21 in the nano-lead group was significantly lower than that in the control group (3.64 ± 0.23 ng/g vs 4.99 ± 0.95 ng/g, P<0.05). The FPN1 level in the hippocampus of offspring rats on PND42 in the nano-lead group was significantly lower than that in the control group (2.28 ± 0.51 ng/g vs 3.69 ± 0.69 ng/g, P<0.05). The escape latencies of offspring rats on PND21 and PND42 in the nano-lead group were longer than those in the control group (15.54 ± 2.89 s vs 9.01 ± 4.66 s; 6.16 ± 1.42 s vs 4.26 ± 1.51 s). The numbers of platform crossings of offspring rats on PND21 and PND42 in the nano-lead group were significantly lower than those in the control group (7.77 ± 2.16 times vs 11.2 ± 1.61 times, P<0.05; 8.12 ± 1.51 times vs 13.0 ± 2.21 times, P<0.05). Nano-lead exposure can result in iron homeostasis disorders in the hippocampus and cortex of offspring rats and impair their learning and memory ability.

  5. Identification of single nucleotide polymorphism in ginger using expressed sequence tags

    PubMed Central

    Chandrasekar, Arumugam; Riju, Aikkal; Sithara, Kandiyl; Anoop, Sahadevan; Eapen, Santhosh J

    2009-01-01

    Ginger (Zingiber officinale Rosc.; family Zingiberaceae) is a herbaceous perennial whose rhizomes are used as a spice, and it is well known for its medicinal applications. EST-derived SNPs come as a free by-product of the currently expanding EST (Expressed Sequence Tag) databases, and the development of high-throughput methods for the detection of SNPs (Single Nucleotide Polymorphisms) and small indels (insertions/deletions) has led to a revolution in their use as molecular markers. The 38,139 available ginger EST sequences were mined from NCBI dbEST. The CAP3 program was used to assemble the EST sequences into contigs. Candidate SNPs and indel polymorphisms were detected using the Perl script AutoSNP version 1.0, which used 31,905 ESTs for detecting SNP and indel sites. We found 64,026 SNP sites and 7,034 indel polymorphisms, a frequency of 0.84 SNPs per 100 bp. Among the three tissues from which the EST libraries had been generated, rhizome had the highest frequency of SNPs/indels (1.08 per 100 bp), leaf the lowest (0.63 per 100 bp), and root an intermediate frequency (0.82 per 100 bp). The transition-to-transversion ratio was 0.90; among the detected SNPs, transversions were more numerous than transitions. These SNPs can be used as markers for genetic studies. Availability: the results of the present study are hosted on our web server at www.spices.res.in/spicesnip PMID:20198184

  6. A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF

    NASA Astrophysics Data System (ADS)

    Deatrich, D. C.; Liu, S. X.; Tafirout, R.

    2010-04-01

    We describe in this paper the design and implementation of Tapeguy, a high-performance non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities that will be performed on the Worldwide LHC Computing Grid infrastructure continuously. Tapeguy is Perl-based. It controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata information for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests using an elevator algorithm, avoiding unnecessary tape loading and unloading. Implementation of priorities will guarantee file delivery to all clients in a timely manner.
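
    The read-back reordering is essentially an elevator scan: group requests by tape and serve each tape in increasing position, so the drive sweeps in one direction per mount. The Perl sketch below is a minimal illustration with made-up request records, not Tapeguy's actual scheduler.

      #!/usr/bin/perl
      use strict; use warnings;

      # Minimal elevator-style ordering: group by tape, then increasing
      # file position, so each mounted tape is read in a single sweep.
      my @requests = (
          { tape => 'TAPE02', pos => 812, file => 'c.root' },
          { tape => 'TAPE01', pos =>  17, file => 'a.root' },
          { tape => 'TAPE02', pos =>  40, file => 'b.root' },
      );

      my @ordered = sort {
          $a->{tape} cmp $b->{tape} or $a->{pos} <=> $b->{pos}
      } @requests;

      print "$_->{tape} @ $_->{pos}: $_->{file}\n" for @ordered;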

  7. Phylogenetic diversity and biodiversity indices on phylogenetic networks.

    PubMed

    Wicke, Kristina; Fischer, Mareike

    2018-04-01

    In biodiversity conservation it is often necessary to prioritize the species to conserve. Existing approaches to prioritization, e.g. the Fair Proportion Index and the Shapley Value, are based on phylogenetic trees and rank species according to their contribution to overall phylogenetic diversity. However, in many cases evolution is not treelike and thus, phylogenetic networks have been developed as a generalization of phylogenetic trees, allowing for the representation of non-treelike evolutionary events, such as hybridization. Here, we extend the concepts of phylogenetic diversity and phylogenetic diversity indices from phylogenetic trees to phylogenetic networks. On the one hand, we consider the treelike content of a phylogenetic network, e.g. the (multi)set of phylogenetic trees displayed by a network and the so-called lowest stable ancestor tree associated with it. On the other hand, we derive the phylogenetic diversity of subsets of taxa and biodiversity indices directly from the internal structure of the network. We consider both approaches that are independent of so-called inheritance probabilities as well as approaches that explicitly incorporate these probabilities. Furthermore, we introduce our software package NetDiversity, which is implemented in Perl and allows for the calculation of all generalized measures of phylogenetic diversity and generalized phylogenetic diversity indices established in this note that are independent of inheritance probabilities. We apply our methods to a phylogenetic network representing the evolutionary relationships among swordtails and platyfishes (Xiphophorus: Poeciliidae), a group of species characterized by widespread hybridization. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Comparing the rankings obtained from two biodiversity indices: the Fair Proportion Index and the Shapley Value.

    PubMed

    Wicke, Kristina; Fischer, Mareike

    2017-10-07

    The Shapley Value and the Fair Proportion Index of phylogenetic trees have been frequently discussed as prioritization tools in conservation biology. Both indices rank species according to their contribution to total phylogenetic diversity, allowing for a simple conservation criterion. While both indices have their specific advantages and drawbacks, it has recently been shown that both values are closely related. However, as different authors use different definitions of the Shapley Value, the specific degree of relatedness depends on the specific version of the Shapley Value - it ranges from a high correlation index to equality of the indices. In this note, we first give an overview of the different indices. Then we turn our attention to the mere ranking order provided by either of the indices. We compare the rankings obtained from different versions of the Shapley Value for a phylogenetic tree of European amphibians and illustrate their differences. We then undertake further analyses on simulated data and show that even though the chance of two rankings being exactly identical (when obtained from different versions of the Shapley Value) decreases with an increasing number of taxa, the distance between the two rankings converges to zero, i.e., the rankings are becoming more and more alike. Moreover, we introduce our freely available software package FairShapley, which was implemented in Perl and with which all calculations have been performed. Copyright © 2017 Elsevier Ltd. All rights reserved.
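
    For readers meeting these indices here for the first time, the Fair Proportion Index of a leaf i in a rooted tree with edge lengths has a single standard definition (it is the Shapley Value that exists in several variants, which is precisely the paper's point):

      FP(i) = \sum_{e \in P(\rho, i)} \frac{\lambda_e}{n_e}

    where P(\rho, i) is the set of edges on the path from the root \rho to leaf i, \lambda_e is the length of edge e, and n_e is the number of leaves below e; each edge's length is thus split evenly among the leaves descending from it.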

  9. Escape Excel: A tool for preventing gene symbol and accession conversion errors.

    PubMed

    Welsh, Eric A; Stewart, Paul A; Kuenzi, Brent M; Eschrich, James A

    2017-01-01

    Microsoft Excel automatically converts certain gene symbols, database accessions, and other alphanumeric text into dates, scientific notation, and other numerical representations. These conversions lead to subsequent, irreversible corruption of the imported text. A recent survey of popular genomic literature estimates that one-fifth of all papers with supplementary gene lists suffer from this issue. Here, we present an open-source tool, Escape Excel, which prevents these erroneous conversions by generating an escaped text file that can be safely imported into Excel. Escape Excel is implemented in a variety of formats (http://www.github.com/pstew/escape_excel), including a command-line Perl script, a Windows-only Excel Add-In, an OS X drag-and-drop application, a simple web server, and a Galaxy web environment interface. Test server implementations are accessible as a Galaxy interface (http://apostl.moffitt.org) and a simple non-Galaxy web server (http://apostl.moffitt.org:8000/). Escape Excel detects and escapes a wide variety of problematic text strings so that they are not erroneously converted into other representations upon importation into Excel. Examples of problematic strings include date-like strings, time-like strings, leading zeroes in front of numbers, and long numeric and alphanumeric identifiers that should not be automatically converted into scientific notation. It is hoped that greater awareness of these potential data corruption issues, together with diligent escaping of text files prior to importation into Excel, will help to reduce the amount of Excel-corrupted data in scientific analyses and publications.
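
    One common escaping strategy, and not necessarily byte-for-byte what Escape Excel emits, is to wrap risky fields in an ="..." formula so Excel preserves them as literal text. The condensed Perl sketch below recognizes only a few of the problematic patterns the abstract lists.

      #!/usr/bin/perl
      use strict; use warnings;

      # Condensed sketch of the escaping idea; the real tool recognizes
      # many more patterns. Wrapping a field as ="..." keeps Excel from
      # coercing it into a date, number or scientific notation.
      sub escape_field {
          my ($f) = @_;
          return qq{="$f"} if $f =~ /^\d+[-\/]\d+/            # date-like: 1/2, 3-4
                           or $f =~ /^0\d+$/                  # leading zeros
                           or $f =~ /^\d+(?:\.\d+)?E\d+$/i    # scientific notation
                           or $f =~ /^(?:SEPT|OCT|DEC)\d+$/i; # date-like gene symbols
          return $f;
      }

      print join("\t", map { escape_field($_) } qw(SEPT1 007 2E5 TP53)), "\n";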

  10. SeedVicious: Analysis of microRNA target and near-target sites.

    PubMed

    Marco, Antonio

    2018-01-01

    Here I describe seedVicious, a versatile microRNA target site prediction software that can be easily fitted into annotation pipelines and run over custom datasets. SeedVicious finds microRNA canonical sites plus other, less efficient, target sites. Among other novel features, seedVicious can compute evolutionary gains/losses of target sites using maximum parsimony, and also detect near-target sites, which have one nucleotide different from a canonical site. Near-target sites are important to study population variation in microRNA regulation. Some analyses suggest that near-target sites may also be functional sites, although there is no conclusive evidence for that, and they may actually be target alleles segregating in a population. SeedVicious does not aim to outperform but to complement existing microRNA prediction tools. For instance, the precision of TargetScan is almost doubled (from 11% to ~20%) when we filter predictions by the distance between target sites using this program. Interestingly, two adjacent canonical target sites are more likely to be present in bona fide target transcripts than pairs of target sites at slightly longer distances. The software is written in Perl and runs on 64-bit Unix computers (Linux and MacOS X). Users with no computing experience can also run the program in a dedicated web-server by uploading custom data, or browse pre-computed predictions. SeedVicious and its associated web-server and database (SeedBank) are distributed under the GPL/GNU license.
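
    The canonical-site detection at the heart of such tools reduces to string matching: take the miRNA seed (positions 2-8), reverse-complement it, and scan the transcript. A minimal Perl sketch using let-7a and a toy 3'UTR follows; it illustrates the general technique, not seedVicious code.

      #!/usr/bin/perl
      use strict; use warnings;

      # Find canonical 7mer-m8 seed matches (reverse complement of miRNA
      # positions 2-8) in a transcript. Toy UTR sequence for illustration.
      my $mirna = 'UGAGGUAGUAGGUUGUAUAGUU';   # let-7a, 5'->3'
      my $utr   = 'AAACUACCUCAAAAGCUACCUCA';  # toy 3'UTR

      my $seed = substr($mirna, 1, 7);                # positions 2-8
      (my $site = reverse $seed) =~ tr/ACGU/UGCA/;    # reverse complement
      while ($utr =~ /$site/g) {
          printf "7mer-m8 site at position %d\n", pos($utr) - length($site) + 1;
      }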

  11. MetaCRAST: reference-guided extraction of CRISPR spacers from unassembled metagenomes.

    PubMed

    Moller, Abraham G; Liang, Chun

    2017-01-01

    Clustered regularly interspaced short palindromic repeat (CRISPR) systems are the adaptive immune systems of bacteria and archaea against viral infection. While CRISPRs have been exploited as a tool for genetic engineering, their spacer sequences can also provide valuable insights into microbial ecology by linking environmental viruses to their microbial hosts. Despite this importance, metagenomic CRISPR detection remains a major challenge. Here we present a reference-guided CRISPR spacer detection tool (Metagenomic CRISPR Reference-Aided Search Tool, MetaCRAST) that constrains searches based on user-specified direct repeats (DRs). These DRs could be expected from assembly or taxonomic profiles of metagenomes. We compared the performance of MetaCRAST to those of two existing metagenomic CRISPR detection tools, Crass and MinCED, using both real and simulated acid mine drainage (AMD) and enhanced biological phosphorus removal (EBPR) metagenomes. Our evaluation shows MetaCRAST improves CRISPR spacer detection in real metagenomes compared to the de novo CRISPR detection methods Crass and MinCED. Evaluation on simulated metagenomes shows it performs better than de novo tools for Illumina metagenomes and comparably for 454 metagenomes. It also has comparable performance dependence on read length and community composition, run time, and accuracy to these tools. MetaCRAST is implemented in Perl, parallelizable through the Many Core Engine (MCE), and takes metagenomic sequence reads and direct repeat queries (FASTA or FASTQ) as input. It is freely available for download at https://github.com/molleraj/MetaCRAST.
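
    The MCE parallelization mentioned above distributes reads across worker processes; a minimal sketch of that pattern is shown below, with a made-up direct repeat, toy reads, and simple exact matching standing in for MetaCRAST's real search.

      #!/usr/bin/perl
      use strict; use warnings;
      use MCE::Loop;

      # Sketch of MCE-style parallelism: scan reads for a user-supplied
      # direct repeat (DR). DR, reads and the exact-match test are all
      # illustrative; MetaCRAST's actual search is more involved.
      MCE::Loop::init { chunk_size => 1, max_workers => 4 };

      my $dr    = 'GTTTCAGAGCTATGCTGTTTTG';   # made-up DR query
      my @reads = (
          'ACGTGTTTCAGAGCTATGCTGTTTTGACCA',
          'TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT',
      );

      mce_loop {
          my $read = $_;
          print "DR-containing read: $read\n" if index($read, $dr) >= 0;
      } @reads;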

  12. Development of a real-time PCR for detection of Staphylococcus pseudintermedius using a novel automated comparison of whole-genome sequences.

    PubMed

    Verstappen, Koen M; Huijbregts, Loes; Spaninks, Mirlin; Wagenaar, Jaap A; Fluit, Ad C; Duim, Birgitta

    2017-01-01

    Staphylococcus pseudintermedius is an opportunistic pathogen in dogs and cats that occasionally causes infections in humans. S. pseudintermedius is often resistant to multiple classes of antimicrobials, and reliable detection is required so that it is not misidentified as S. aureus. Phenotypic and currently used molecular diagnostic assays lack specificity or are labour-intensive, relying on multiplex PCR or nucleic acid sequencing. The aim of this study was to identify a specific target for real-time PCR by comparing whole-genome sequences of S. pseudintermedius and non-pseudintermedius. Genome sequences were downloaded from public repositories and supplemented by isolates that were sequenced in this study. A Perl script was written that analysed 300-nt fragments from a reference genome sequence of S. pseudintermedius and checked whether each sequence was present in other S. pseudintermedius genomes (n = 74) and in non-pseudintermedius genomes (n = 138). Six sequences specific for S. pseudintermedius were identified (sequence lengths between 300 and 500 nt). One sequence, located in the spsJ gene, was used to develop primers and a probe. The real-time PCR showed 100% specificity when testing S. pseudintermedius isolates (n = 54) and eight other staphylococcal species (n = 43). In conclusion, a novel approach comparing whole-genome sequences identified a sequence that is specific for S. pseudintermedius and provided a real-time PCR target for rapid and reliable detection of S. pseudintermedius.
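
    The fragment-screening idea restates simply: slide a window along a reference and keep fragments present in every target genome and absent from every off-target genome. The Perl sketch below illustrates this with tiny made-up sequences and an 8-nt window standing in for the paper's 300-nt fragments; it is not the authors' script.

      #!/usr/bin/perl
      use strict; use warnings;

      # Illustrative screening loop: keep windows found in all targets
      # but in none of the off-targets. Sequences are made up; real use
      # would stream whole genomes and an indexed search, not index().
      my $win         = 8;
      my $ref         = 'ACGTACGTTTTTAAAACCCCGGGG';
      my @targets     = ('xxACGTACGTyy', 'zzACGTACGTww');
      my @off_targets = ('ACGTACGA',     'TTTTAAAA');

      for (my $i = 0; $i + $win <= length $ref; $i += $win) {
          my $frag = substr($ref, $i, $win);
          next if grep { index($_, $frag) <  0 } @targets;      # must hit all
          next if grep { index($_, $frag) >= 0 } @off_targets;  # must miss all
          print "candidate fragment at offset $i: $frag\n";
      }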

  13. ToxReporter: viewing the genome through the eyes of a toxicologist.

    PubMed

    Gosink, Mark

    2016-01-01

    One of the many roles of a toxicologist is to determine whether an observed adverse event (AE) is related to a previously unrecognized function of a given gene/protein. Towards that end, he or she will search a variety of public and proprietary databases for information linking that protein to the observed AE. However, these databases tend to present all available information about a protein, which can be overwhelming and limits the ability to find information about the specific toxicity being investigated. ToxReporter compiles information from a broad selection of resources and limits the display to user-selected areas of interest. ToxReporter is a Perl-based web application which utilizes a MySQL database to streamline this process by categorizing public- and proprietary-domain information into predefined safety categories according to a customizable lexicon. Users can view gene information that is 'red-flagged' according to the safety issue under investigation. ToxReporter also uses a scoring system based on relative counts of the red flags to rank all genes by the amount of information pertaining to each safety issue and to display their scored ranking as an easily interpretable 'Tox-At-A-Glance' chart. Although ToxReporter was originally developed to display safety information, its flexible design could easily be adapted to display disease information as well. Database URL: ToxReporter is freely available at https://github.com/mgosink/ToxReporter. © The Author(s) 2016. Published by Oxford University Press.

  14. Protistan Grazing Analysis by Flow Cytometry Using Prey Labeled by In Vivo Expression of Fluorescent Proteins

    PubMed Central

    Fu, Yutao; O'Kelly, Charles; Sieracki, Michael; Distel, Daniel L.

    2003-01-01

    Selective grazing by protists can profoundly influence bacterial community structure, and yet direct, quantitative observation of grazing selectivity has been difficult to achieve. In this investigation, flow cytometry was used to study grazing by the marine heterotrophic flagellate Paraphysomonas imperforata on live bacterial cells genetically modified to express the fluorescent protein markers green fluorescent protein (GFP) and red fluorescent protein (RFP). Broad-host-range plasmids were constructed that express fluorescent proteins in three bacterial prey species, Escherichia coli, Enterobacter aerogenes, and Pseudomonas putida. Micromonas pusilla, an alga with red autofluorescence, was also used as prey. Predator-prey interactions were quantified by using a FACScan flow cytometer and analyzed by using a Perl program described here. Grazing preference of P. imperforata was influenced by prey type, size, and condition. In competitive feeding trials, P. imperforata consumed algal prey at significantly lower rates than FP (fluorescent protein)-labeled bacteria of similar or different size. Within-species size selection was also observed, but only for P. putida, the largest prey species examined; smaller cells of P. putida were grazed preferentially. No significant difference in clearance rate was observed between GFP- and RFP-labeled strains of the same prey species or between wild-type and GFP-labeled strains. In contrast, the common chemical staining method, 5-(4,6-dichloro-triazin-2-yl)-amino fluorescein hydrochloride, depressed clearance rates for bacterial prey compared to unlabeled or RFP-labeled cells. PMID:14602649

  15. Lightweight application for generating clinical research information systems: MAGIC.

    PubMed

    Leskošek, Brane; Pajntar, Marjan

    2015-12-01

    Our purpose was to build and test a lightweight solution for generating clinical research information systems (CRIS) that would allow non-IT professionals with basic knowledge of computer usage to quickly define and build a ready-to-use, safe and secure web-based clinical research system for data management. We use the acronym MAGIC (Medical Application Generator InteraCtive) for the system. The generated CRIS should be very easy to build and use, so a common LAMP (Linux, Apache, MySQL, Perl) platform was used, which also enables short development cycles. The application was built and tested using eXtreme Programming (XP) principles by a small development team consisting of one informatics specialist, one physician and one graphical designer/programmer. The parameter and graphical user interface (GUI) definitions for the CRIS can be made by non-IT professionals using an intuitive English-language-like formalism called application definition language (ADL). From these definitions, MAGIC builds an end-user CRIS that can be used on a wide variety of platforms (from standard workstations to hand-held devices). A working example of a national health-care-quality assessment program is presented to illustrate this process. The lightweight application for generating CRIS (MAGIC) has proven to be useful for both clinical and analytical users in a real working environment. To achieve better performance and interoperability, we plan to recompile the application using XML schemas (XSD) in HL7 CDA or openEHR archetype formats for parameter definition and data interchange between different information systems.

  16. Visualization of nigrosome 1 and its loss in PD

    PubMed Central

    Schwarz, Stefan T.; Pitiot, Alain; Stephenson, Mary C.; Lowe, James; Bajaj, Nin; Bowtell, Richard W.; Auer, Dorothee P.; Gowland, Penny A.

    2013-01-01

    Objective: This study assessed whether high-resolution 7 T MRI allowed direct in vivo visualization of nigrosomes, substructures of the substantia nigra pars compacta (SNpc) undergoing the greatest and earliest dopaminergic cell loss in Parkinson disease (PD), and whether any disease-specific changes could be detected in patients with PD. Methods: Postmortem (PM) midbrains, 2 from healthy controls (HCs) and 1 from a patient with PD, were scanned with high-resolution T2*-weighted MRI scans, sectioned, and stained for iron and neuromelanin (Perl), TH, and calbindin. To confirm the identification of nigrosomes in vivo on 7 T T2*-weighted scans, we assessed colocalization with neuromelanin-sensitive T1-weighted scans. We then assessed the ability to depict PD pathology on in vivo T2*-weighted scans by comparing data from 10 patients with PD and 8 age- and sex-matched HCs. Results: A hyperintense, ovoid area within the dorsolateral border of the otherwise hypointense SNpc was identified in the HC brains on in vivo and PM T2*-weighted MRI. Location, size, shape, and staining characteristics conform to nigrosome 1. Blinded assessment by 2 neuroradiologists showed consistent bilateral absence of this nigrosome feature in all 10 patients with PD, and bilateral presence in 7/8 HC. Conclusions: In vivo and PM MRI with histologic correlation demonstrates that high-resolution 7 T MRI can directly visualize nigrosome 1. The absence of nigrosome 1 in the SNpc on MRI scans might prove useful in developing a neuroimaging diagnostic test for PD. PMID:23843466

  17. SNPmplexViewer--toward a cost-effective traceability system

    PubMed Central

    2011-01-01

    Background Beef traceability has become mandatory in many regions of the world and is typically achieved through the use of unique numerical codes on ear tags and animal passports. DNA-based traceability uses the animal's own DNA code to identify it and the products derived from it. Using SNaPshot, a primer-extension-based method, a multiplex of 25 SNPs in a single reaction has been used to reduce the expense of genotyping a panel of SNPs useful for identity control. Findings To further decrease SNaPshot's cost, we introduced the Perl script SNPmplexViewer, which facilitates the analysis of trace files for reactions performed without the use of fluorescent size standards. SNPmplexViewer automatically aligns reference and target trace electropherograms, run with and without fluorescent size standards, respectively. SNPmplexViewer produces a modified target trace file containing a normalised trace in which the reference size standards are embedded. SNPmplexViewer also outputs aligned images of the two electropherograms together with a difference profile. Conclusions Modified trace files generated by SNPmplexViewer enable genotyping of SNaPshot reactions performed without fluorescent size standards, using common fragment-sizing software packages. SNPmplexViewer's normalised output may also improve the genotyping software's performance. Thus, SNPmplexViewer is a free, general tool that reduces SNaPshot's cost and enables fast viewing and comparison of trace electropherograms for fragment analysis. SNPmplexViewer is available at http://cowry.agri.huji.ac.il/cgi-bin/SNPmplexViewer.cgi. PMID:21600063

  18. Lambert W function for applications in physics

    NASA Astrophysics Data System (ADS)

    Veberič, Darko

    2012-12-01

    The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion. Program summary: Program title: LambertW. Catalogue identifier: AENC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 1335. No. of bytes in distributed program, including test data, etc.: 25 283. Distribution format: tar.gz. Programming language: C++ (with suitable wrappers it can be called from C, Fortran, etc.); the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl, etc. Computer: all systems with a C++ compiler. Operating system: all Unix flavors, Windows; it might work with others. RAM: small memory footprint, less than 1 MB. Classification: 1.1, 4.7, 11.3, 11.9. Nature of problem: find a fast and accurate numerical implementation of the Lambert W function. Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion. Additional comments: the distribution file contains the command-line utility lambert-w, Doxygen comments included in the source files, and a Makefile. Running time: the tests provided take only a few seconds to run.
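
    As a rough illustration of the solution method named in the summary, here is a minimal Perl sketch of Halley's iteration for the principal branch W0(x) with x >= 0. The starting guess and stopping rule are simplifying assumptions of this sketch; the published C++ package adds branch handling, Fritsch's iteration, and carefully tuned initial approximations.

        #!/usr/bin/perl
        # Minimal sketch of Halley's iteration for the principal branch W0(x), x >= 0.
        # The published LambertW package adds branch handling, Fritsch's iteration,
        # and tuned initial approximations; none of that is reproduced here.
        use strict;
        use warnings;

        sub lambert_w0 {
            my ($x) = @_;
            die "this sketch handles x >= 0 only\n" if $x < 0;
            # Crude starting guess: log(1+x) for small x, log(x) - log(log(x)) for large x.
            my $w = $x < 3 ? log(1 + $x) : log($x) - log(log($x));
            for (1 .. 100) {
                my $e    = exp($w);
                my $f    = $w * $e - $x;    # we seek the root of f(w) = w*e^w - x
                my $step = $f / ($e * ($w + 1) - ($w + 2) * $f / (2 * $w + 2));
                $w -= $step;
                return $w if abs($step) <= 1e-14 * (abs($w) + 1e-14);
            }
            return $w;    # close enough for a sketch even if the tolerance was not met
        }

        printf "W(1)  = %.15f\n", lambert_w0(1);     # ~0.567143290409784 (the omega constant)
        printf "W(10) = %.15f\n", lambert_w0(10);    # ~1.745528002740699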

  19. Design and Implementation of the CEBAF Element Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-10-01

    With inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on-the-fly without changing table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with the exact same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous. Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
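
    The "introspective" schema described above is, in essence, an entity-attribute-value design: new element types and properties become rows rather than new tables. The following hedged Perl/DBI sketch illustrates the idea; the table and column names are invented for illustration and are not the actual CED schema, which also layers Oracle Workspace Manager on top for history and workspaces.

        #!/usr/bin/perl
        # Hedged sketch of an "introspective" (entity-attribute-value) lookup of the
        # kind the CED abstract describes. Table and column names are hypothetical,
        # not the real CED schema, and the history layer is omitted entirely.
        # Assumes DBD::SQLite; any DBI driver works the same way.
        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:SQLite:dbname=ced_demo.db', '', '',
                               { RaiseError => 1, AutoCommit => 1 });

        # Element types and their properties are rows, not tables, so defining a
        # new type or property never changes the table structure.
        my $sql = q{
            SELECT e.name, p.name AS property, v.value
              FROM element        e
              JOIN element_type   t ON t.id = e.type_id
              JOIN property       p ON p.type_id = t.id
              JOIN property_value v ON v.element_id = e.id AND v.property_id = p.id
             WHERE t.name = ?
        };
        my $rows = $dbh->selectall_arrayref($sql, { Slice => {} }, 'Quadrupole');
        printf "%s.%s = %s\n", @{$_}{qw(name property value)} for @$rows;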

  20. Integrating medical imaging analyses through a high-throughput bundled resource imaging system

    NASA Astrophysics Data System (ADS)

    Covington, Kelsie; Welch, E. Brian; Jeong, Ha-Kyu; Landman, Bennett A.

    2011-03-01

    Exploitation of advanced, PACS-centric image analysis and interpretation pipelines provides well-developed storage, retrieval, and archival capabilities along with state-of-the-art data providence, visualization, and clinical collaboration technologies. However, pursuit of integrated medical imaging analysis through a PACS environment can be limiting in terms of the overhead required to validate, evaluate and integrate emerging research technologies. Herein, we address this challenge through presentation of a high-throughput bundled resource imaging system (HUBRIS) as an extension to the Philips Research Imaging Development Environment (PRIDE). HUBRIS enables PACS-connected medical imaging equipment to invoke tools provided by the Java Imaging Science Toolkit (JIST) so that a medical imaging platform (e.g., a magnetic resonance imaging scanner) can pass images and parameters to a server, which communicates with a grid computing facility to invoke the selected algorithms. Generated images are passed back to the server and subsequently to the imaging platform from which the images can be sent to a PACS. JIST makes use of an open application program interface layer so that research technologies can be implemented in any language capable of communicating through a system shell environment (e.g., Matlab, Java, C/C++, Perl, LISP, etc.). As demonstrated in this proof-of-concept approach, HUBRIS enables evaluation and analysis of emerging technologies within well-developed PACS systems with minimal adaptation of research software, which simplifies evaluation of new technologies in clinical research and provides a more convenient use of PACS technology by imaging scientists.

  1. The GMOD Drupal bioinformatic server framework.

    PubMed

    Papanicolaou, Alexie; Heckel, David G

    2010-12-15

    Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com.

  2. NGSPanPipe: A Pipeline for Pan-genome Identification in Microbial Strains from Experimental Reads.

    PubMed

    Kulsum, Umay; Kapil, Arti; Singh, Harpreet; Kaur, Punit

    2018-01-01

    Recent advancements in sequencing technologies have decreased both the time span and the cost of sequencing a whole bacterial genome. High-throughput Next-Generation Sequencing (NGS) technology has led to the generation of enormous data concerning microbial populations publicly available across various repositories. As a consequence, it has become possible to study and compare the genomes of different bacterial strains within a species or genus in terms of evolution, ecology and diversity. Studying the pan-genome provides insights into deciphering microevolution, global composition and diversity in virulence and pathogenesis of a species. It can also assist in identifying drug targets and proposing vaccine candidates. The effective analysis of these large genome datasets necessitates the development of robust tools. Current methods for developing a pan-genome do not support direct input of raw reads from the sequencer machine but require preprocessing of reads as an assembled protein/gene sequence file or the binary matrix of orthologous genes/proteins. We have designed an easy-to-use integrated pipeline, NGSPanPipe, which can directly identify the pan-genome from short reads. The output from the pipeline is compatible with other pan-genome analysis tools. We evaluated our pipeline against other methods for developing pan-genomes, i.e., reference-based assembly and de novo assembly, using simulated reads of Mycobacterium tuberculosis. The single-script pipeline (pipeline.pl) is applicable to all bacterial strains. It integrates multiple in-house Perl scripts and is freely accessible from https://github.com/Biomedinformatics/NGSPanPipe.

  3. Security Data Warehouse Application

    NASA Technical Reports Server (NTRS)

    Vernon, Lynn R.; Hennan, Robert; Ortiz, Chris; Gonzalez, Steve; Roane, John

    2012-01-01

    The Security Data Warehouse (SDW) is used to aggregate and correlate all JSC IT security data. This includes IT asset inventory such as operating systems and patch levels, users, user logins, remote access dial-in and VPN, and vulnerability tracking and reporting. The correlation of this data allows for an integrated understanding of current security issues and systems by providing this data in a format that associates it with an individual host. The cornerstone of the SDW is its unique host-mapping algorithm that has undergone extensive field tests, and provides a high degree of accuracy. The algorithm comprises two parts. The first part employs fuzzy logic to derive a best-guess host assignment using incomplete sensor data. The second part is logic to identify and correct errors in the database, based on subsequent, more complete data. Host records are automatically split or merged, as appropriate. The process had to be refined and thoroughly tested before the SDW deployment was feasible. Complexity was increased by adding the dimension of time. The SDW correlates all data with its relationship to time. This lends support to forensic investigations, audits, and overall situational awareness. Another important feature of the SDW architecture is that all of the underlying complexities of the data model and host-mapping algorithm are encapsulated in an easy-to-use and understandable Perl language Application Programming Interface (API). This allows the SDW to be quickly augmented with additional sensors using minimal coding and testing. It also supports rapid generation of ad hoc reports and integration with other information systems.
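
    The first, fuzzy-logic part of the host-mapping algorithm can be caricatured as weighted attribute matching over incomplete sensor data. The sketch below is a toy illustration only; the fields, weights, and tie-breaking of the real SDW algorithm are not described in the abstract, and its second, error-correcting pass is omitted entirely.

        #!/usr/bin/perl
        # Toy sketch of fuzzy best-guess host matching in the spirit of the SDW
        # abstract. Attribute names and weights are invented for illustration.
        use strict;
        use warnings;

        my %weight = ( mac => 0.6, ip => 0.25, hostname => 0.15 );

        # Score a sensor record against a candidate host record; missing sensor
        # fields simply contribute nothing, which is what makes the match "fuzzy".
        sub match_score {
            my ($sensor, $host) = @_;
            my $score = 0;
            for my $attr (keys %weight) {
                next unless defined $sensor->{$attr} && defined $host->{$attr};
                $score += $weight{$attr} if lc $sensor->{$attr} eq lc $host->{$attr};
            }
            return $score;
        }

        my @hosts = (
            { id => 1, mac => 'aa:bb:cc:dd:ee:ff', ip => '10.0.0.5', hostname => 'ws01' },
            { id => 2, mac => '11:22:33:44:55:66', ip => '10.0.0.9', hostname => 'ws02' },
        );
        my $sensor = { ip => '10.0.0.9', hostname => 'ws02' };    # incomplete sensor data

        my ($best) = sort { match_score($sensor, $b) <=> match_score($sensor, $a) } @hosts;
        print "best-guess host: $best->{id}\n";    # host 2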

  4. Effects of Author Contribution Disclosures and Numeric Limitations on Authorship Trends

    PubMed Central

    McDonald, Robert J.; Neff, Kevin L.; Rethlefsen, Melissa L.; Kallmes, David F.

    2010-01-01

    OBJECTIVE: To determine whether editorial policies designed to eliminate gratuitous authorship (globally referred to as authorship limitation policies), including author contribution disclosures and/or numeric restrictions, have significantly affected authorship trends during a 20-year period. METHODS: We used a custom Perl-based algorithm to extract data, including number of authors, publication date, and article subtype, from articles published from January 1, 1986, through December 31, 2006, in 16 medical journals (8 with explicit authorship guidelines restricting authorship and 8 without formal authorship policies), comprising 307,190 articles. Trends in the mean number of authors per article, sorted by journal type, article subtype, and presence of authorship limitations, were determined using Sen's slope analysis and compared using analysis of variance and matched-pair analysis. Trend data were compared among the journals that had implemented 1 or both of these formal restrictive authorship policies and those that had not in order to determine their effect on authorship over time. RESULTS: The number of authors per article has been increasing among all journals at a mean ± SD rate of 0.076±0.057 authors per article per year. No significant differences in authorship rate were observed between journals with and without authorship limits before enforcement (F=1.097; P=.30). After enforcement, no significant change in authorship rates was observed (matched pair: F=0.425; P=.79). CONCLUSION: Implementation of authorship limitation policies does not slow the trend of increasing numbers of authors per article over time. PMID:20884825
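
    Sen's slope, the trend statistic used in this study, is simply the median of the slopes over all pairs of observations. A minimal Perl sketch follows, run on hypothetical yearly means rather than the study's data.

        #!/usr/bin/perl
        # Minimal Sen's slope estimator: the median of slopes over all pairs of
        # points. This is the standard definition, not the authors' analysis code.
        use strict;
        use warnings;

        sub sens_slope {
            my (@points) = @_;    # each point is [year, mean_authors_per_article]
            my @slopes;
            for my $i (0 .. $#points - 1) {
                for my $j ($i + 1 .. $#points) {
                    my ($x1, $y1) = @{ $points[$i] };
                    my ($x2, $y2) = @{ $points[$j] };
                    push @slopes, ($y2 - $y1) / ($x2 - $x1) if $x2 != $x1;
                }
            }
            @slopes = sort { $a <=> $b } @slopes;
            my $n = @slopes;
            return $n % 2 ? $slopes[ $n / 2 ]
                          : ($slopes[ $n / 2 - 1 ] + $slopes[ $n / 2 ]) / 2;
        }

        # Hypothetical yearly means, rising by roughly 0.08 authors/article/year:
        my @data = ([1986, 3.1], [1991, 3.5], [1996, 3.9], [2001, 4.3], [2006, 4.7]);
        printf "Sen's slope: %.3f authors/article/year\n", sens_slope(@data);    # 0.080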

  5. GENE-Counter: A Computational Pipeline for the Analysis of RNA-Seq Data for Gene Expression Differences

    PubMed Central

    Di, Yanming; Schafer, Daniel W.; Wilhelm, Larry J.; Fox, Samuel E.; Sullivan, Christopher M.; Curzon, Aron D.; Carrington, James C.; Mockler, Todd C.; Chang, Jeff H.

    2011-01-01

    GENE-counter is a complete Perl-based computational pipeline for analyzing RNA-Sequencing (RNA-Seq) data for differential gene expression. In addition to its use in studying transcriptomes of eukaryotic model organisms, GENE-counter is applicable to prokaryotes and non-model organisms without an available genome reference sequence. For alignments, GENE-counter is configured for CASHX, Bowtie, and BWA, but an end user can use any Sequence Alignment/Map (SAM)-compliant program of preference. To analyze data for differential gene expression, GENE-counter can be run with any one of three statistics packages that are based on variations of the negative binomial distribution. The default method is a new and simple statistical test we developed based on an over-parameterized version of the negative binomial distribution. GENE-counter also includes three different methods for assessing differentially expressed features for enriched gene ontology (GO) terms. Results are transparent, and data are systematically stored in a MySQL relational database to facilitate additional analyses as well as quality assessment. We used next generation sequencing to generate a small-scale RNA-Seq dataset derived from the heavily studied defense response of Arabidopsis thaliana and used GENE-counter to process the data. Collectively, the support from the microarray analyses, together with the observed and substantial overlap in results from each of the three statistics packages, demonstrates that GENE-counter is well suited for handling the unique characteristics of small sample sizes and high variability in gene counts. PMID:21998647

  6. WGSSAT: A High-Throughput Computational Pipeline for Mining and Annotation of SSR Markers From Whole Genomes.

    PubMed

    Pandey, Manmohan; Kumar, Ravindra; Srivastava, Prachi; Agarwal, Suyash; Srivastava, Shreya; Nagpure, Naresh S; Jena, Joy K; Kushwaha, Basdeo

    2018-03-16

    Mining and characterization of Simple Sequence Repeat (SSR) markers from whole genomes provide valuable information about the biological significance of SSR distribution and also facilitate the development of markers for genetic analysis. The Whole Genome Sequencing (WGS)-SSR Annotation Tool (WGSSAT) is a graphical user interface pipeline, developed using Java NetBeans and Perl scripts, which simplifies the process of SSR mining and characterization. WGSSAT takes input in FASTA format and automates the prediction of genes, noncoding RNA (ncRNA), core genes, repeats and SSRs from whole genomes, followed by mapping of the predicted SSRs onto a genome (classified according to genes, ncRNA, repeats, exonic, intronic, and core gene regions) along with primer identification and mining of cross-species markers. The program also generates a detailed statistical report along with visualization of mapped SSRs, genes, core genes, and RNAs. The features of WGSSAT were demonstrated using Takifugu rubripes data. This yielded a total of 139 057 SSRs, out of which 113 703 SSR primer pairs were uniquely amplified in silico on the T. rubripes (fugu) genome. Of the 113 703 mined SSRs, 81 463 were from gene regions (4286 exonic and 77 177 intronic), 7 from RNA, and 267 from core genes of fugu, whereas 105 641 SSRs and 601 SSR primer pairs were uniquely mapped onto the medaka genome. WGSSAT was tested under Ubuntu Linux. The source code, documentation, user manual, example dataset and scripts are available online at https://sourceforge.net/projects/wgssat-nbfgr.
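
    The SSR-mining step that tools of this kind automate is commonly implemented with a backreference regular expression; the short Perl sketch below shows the idea. The motif-length and tract-length thresholds are arbitrary illustration values, not WGSSAT's defaults.

        #!/usr/bin/perl
        # Simplified SSR (microsatellite) scan with a backreference regex:
        # a motif of 1-6 bp repeated at least 4 times, spanning >= 12 bp.
        # Thresholds are illustration values, not WGSSAT's actual defaults.
        use strict;
        use warnings;

        my $seq = 'ACGTACACACACACACGGATAGATAGATAGATAGGCTTT';

        while ($seq =~ /(([ACGT]{1,6}?)\2{3,})/g) {
            my ($repeat, $motif) = ($1, $2);
            next if length($repeat) < 12;                   # minimum tract length
            my $start = pos($seq) - length($repeat) + 1;    # 1-based start position
            printf "motif=%s copies=%d start=%d tract=%s\n",
                   $motif, length($repeat) / length($motif), $start, $repeat;
        }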

  7. Using Internet-Based Automated Software to Process GPS Data at Michigan Tech University

    NASA Astrophysics Data System (ADS)

    Crook, A.; Diehl, J. F.

    2003-12-01

    The Michigan Tech University GPS monument was made operational in October of 2002. The monument, which consists of a concrete pillar extending approximately 10 feet below the surface and protruding 5 feet above ground, is located at the Houghton County Memorial Airport (47.171803° N, 88.498361° W). The primary purpose of the monument is to measure the velocity of the North American Plate at this location. A Trimble 4000ssi geodetic receiver with a Trimble Zephyr antenna is used to collect GPS data. The data are sent to a PC where they are processed using Auto-GIPSY, an internet-based GPS processing utility, which makes it possible to process GPS data, via email, without having knowledge of how the software works. Two Perl scripts were written to facilitate automation and to simplify processing of the GPS data even further. Twelve months of GPS data were processed using Auto-GIPSY, which produced a velocity of -24 +/- 5 mm/yr and -4 +/- 6 mm/yr for the X and Y components, respectively, with an azimuth of 261° with respect to the ITRF2000. This calculated result compares well with the NNR-NUVEL1A velocity of -17 mm/yr and -1 mm/yr for the X and Y components, respectively, with an azimuth of 267°. The results from an alternative online processing service, the Scripps Coordinate Update Tool (SCOUT), which uses GAMIT, will also be presented as a comparative method.

  8. Towards the Development of a Unified Distributed Data System for L1 Spacecraft

    NASA Technical Reports Server (NTRS)

    Lazarus, Alan J.; Kasper, Justin C.

    2005-01-01

    The purpose of this grant, 'Towards the Development of a Unified Distributed Data System for L1 Spacecraft', is to take the initial steps towards the development of a data distribution mechanism for making in-situ measurements more easily accessible to the scientific community. Our obligations as subcontractors to this grant are to add our Faraday Cup plasma data to this initial study and to contribute to the design of a general data distribution system. The year 1 objectives of the overall project as stated in the GSFC proposal are: 1) Both the rsync and Perl-based data exchange tools will be fully developed and tested in our mixed Unix, VMS, Windows, and Mac OS X data service environment. Based on the performance comparisons, one will be selected and fully deployed, and continuous data exchange between all L1 solar wind monitors initiated. 2) Data version metadata will be agreed upon, fully documented, and deployed on our data sites. 3) The first version of the data description rules, encoded in an XML Schema, will be finalized. 4) A preliminary set of library routines will be collected, documentation standards and formats agreed on, and desirable routines that have not been implemented identified and assigned. 5) A ViSBARD test site will be implemented to independently validate data mirroring procedures. The specific MIT tasks over the duration of this project are the following: a) implement a mirroring service for WIND plasma data; b) participate in XML Schema development; and c) contribute toward the routine library.
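
    An rsync-based exchange of the kind weighed in objective 1 can be driven from a thin Perl wrapper such as the sketch below; the host names, module paths, and destination directories are invented placeholders, not the project's configuration.

        #!/usr/bin/perl
        # Thin Perl wrapper around rsync, sketching the data-exchange style the
        # objectives compare. Hosts and paths are invented placeholders.
        use strict;
        use warnings;

        my %mirror = (
            wind_plasma => ['mirror.example.edu::wind/plasma/', '/data/l1/wind/'],
            ace_swepam  => ['mirror.example.edu::ace/swepam/',  '/data/l1/ace/'],
        );

        for my $name (sort keys %mirror) {
            my ($src, $dst) = @{ $mirror{$name} };
            my @cmd = ('rsync', '--archive', '--compress', '--partial',
                       '--timeout=300', $src, $dst);
            print "[$name] @cmd\n";
            system(@cmd) == 0
                or warn "[$name] rsync exited with status " . ($? >> 8) . "\n";
        }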

  9. Intragenomic polymorphisms among high-copy loci: a genus-wide study of nuclear ribosomal DNA in Asclepias (Apocynaceae).

    PubMed

    Weitemier, Kevin; Straub, Shannon C K; Fishbein, Mark; Liston, Aaron

    2015-01-01

    Despite knowledge that concerted evolution of high-copy loci is often imperfect, studies that investigate the extent of intragenomic polymorphisms and comparisons across a large number of species are rarely made. We present a bioinformatic pipeline for characterizing polymorphisms within an individual among copies of a high-copy locus. Results are presented for nuclear ribosomal DNA (nrDNA) across the milkweed genus, Asclepias. The 18S-26S portion of the nrDNA cistron of Asclepias syriaca served as a reference for assembly of the region from 124 samples representing 90 species of Asclepias. Reads were mapped back to each individual's consensus, and at each position reads differing from the consensus were tallied using a custom Perl script. Low-frequency polymorphisms existed in all individuals (mean = 5.8%). Most nrDNA positions (91%) were polymorphic in at least one individual, with polymorphic sites being less frequent in subunit regions and loops. Highly polymorphic sites existed in each individual, with the highest abundance in the "noncoding" ITS regions. Phylogenetic signal was present in the distribution of intragenomic polymorphisms across the genus. Intragenomic polymorphisms in nrDNA are common in Asclepias, being found at higher frequencies than in any other study to date. The high and variable frequency of polymorphisms across species highlights concerns that phylogenetic applications of nrDNA may be error-prone. The new analytical approach provided here is applicable to other taxa and other high-copy regions characterized by low-coverage genome sequencing (genome skimming).
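
    The tallying step described above reduces to counting, for each alignment column, the reads whose base differs from the individual's consensus. The following Perl sketch works on toy pre-aligned strings; the actual pipeline operated on mapped reads, so treat this as an illustration of the counting logic only.

        #!/usr/bin/perl
        # Sketch of per-position polymorphism tallying: for each alignment column,
        # count reads that differ from the consensus base. The aligned-string input
        # is a toy stand-in for the mapped-read pileup used in the paper.
        use strict;
        use warnings;

        my $consensus = 'ACGTACGT';
        my @reads = (
            'ACGTACGT',
            'ACGAACGT',    # mismatch at position 4
            'ACGTACCT',    # mismatch at position 7
            'ACGAACGT',    # mismatch at position 4
        );

        my @mismatch = (0) x length $consensus;
        my @depth    = (0) x length $consensus;

        for my $read (@reads) {
            for my $i (0 .. length($read) - 1) {
                my $base = substr $read, $i, 1;
                next if $base eq '-';    # skip gaps/missing data
                $depth[$i]++;
                $mismatch[$i]++ if $base ne substr $consensus, $i, 1;
            }
        }

        for my $i (0 .. $#mismatch) {
            next unless $mismatch[$i];
            printf "pos %d: %d/%d reads differ (%.1f%%)\n",
                   $i + 1, $mismatch[$i], $depth[$i], 100 * $mismatch[$i] / $depth[$i];
        }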

  10. A computational genomics pipeline for prokaryotic sequencing projects.

    PubMed

    Kislyuk, Andrey O; Katz, Lee S; Agrawal, Sonia; Hagen, Matthew S; Conley, Andrew B; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C; Sammons, Scott A; Govil, Dhwani; Mair, Raydel D; Tatti, Kathleen M; Tondella, Maria L; Harcourt, Brian H; Mayer, Leonard W; Jordan, I King

    2010-08-01

    New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems.

  11. Codon usage bias and phylogenetic analysis of mitochondrial ND1 gene in pisces, aves, and mammals.

    PubMed

    Uddin, Arif; Choudhury, Monisha Nath; Chakraborty, Supriyo

    2018-01-01

    The mitochondrially encoded NADH:ubiquinone oxidoreductase core subunit 1 (MT-ND1) gene is a subunit of the respiratory chain complex I and is involved in the first step of the electron transport chain of oxidative phosphorylation (OXPHOS). To understand the pattern of compositional properties, codon usage, and expression level of mitochondrial ND1 genes in pisces, aves, and mammals, we used bioinformatic approaches, as no such work had been reported earlier. In this study, a Perl script was used to calculate nucleotide contents and different codon usage bias parameters. The codon usage bias of MT-ND1 was low, but the expression level was high, as revealed by the high ENC and CAI values. Correspondence analysis (COA) suggests that the pattern of codon usage for the MT-ND1 gene is not the same across species and that compositional constraint played an important role in the codon usage pattern of this gene among pisces, aves, and mammals. From the regression equation of GC12 on GC3, it can be inferred that natural selection might have played a dominant role while mutation pressure played a minor role in influencing the codon usage patterns. Further, the ND1 gene differs from the cytochrome B (CYB) gene in its codon preferences, as evident from COA. The codon usage bias is influenced by nucleotide composition, natural selection, mutation pressure, the length (number) of amino acids, and relative dinucleotide composition. This study helps in understanding the molecular biology, genetics, and evolution of the MT-ND1 gene, and also in designing a synthetic gene.
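
    Compositional quantities of the kind computed by such a script, for example overall GC content and GC3 (GC at third codon positions, the regressor in the GC12-on-GC3 analysis), reduce to simple counting. A minimal Perl sketch follows; it is not the authors' script, which also computes ENC, CAI, and related bias measures.

        #!/usr/bin/perl
        # Minimal sketch of two compositional measures used in codon-usage studies:
        # overall GC content and GC3 (GC at third codon positions).
        use strict;
        use warnings;

        sub gc_fraction {
            my ($seq) = @_;
            my $gc = () = $seq =~ /[GC]/g;    # count G/C matches
            return $gc / length $seq;
        }

        sub gc3_fraction {
            my ($cds) = @_;
            my ($gc3, $codons) = (0, 0);
            for (my $i = 0; $i + 2 < length $cds; $i += 3) {
                $codons++;
                $gc3++ if substr($cds, $i + 2, 1) =~ /[GC]/;
            }
            return $gc3 / $codons;
        }

        my $cds = 'ATGGCCATTGTAATGGGCCGC';    # toy coding sequence, 7 codons
        printf "GC  = %.3f\n", gc_fraction($cds);
        printf "GC3 = %.3f\n", gc3_fraction($cds);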

  12. Characterisation of the vascular pathology in Sigmodon hispidus (Rodentia: Cricetidae) following experimental infection with Angiostrongylus costaricensis (Nematoda: Metastrongylidae).

    PubMed

    Vasconcelos, Danielle Ingrid Bezerra de; Mota, Ester Maria; Pelajo-Machado, Marcelo

    2017-05-01

    Angiostrongylus costaricensis is a nematode that causes human abdominal angiostrongyliasis, a disease found mainly in Latin American countries and particularly in Brazil and Costa Rica. Its life cycle involves exploitation of both invertebrate and vertebrate hosts. Its natural reservoir is a vertebrate host, the cotton rat Sigmodon hispidus. The adult worms live in the ileo-colic branches of the upper mesenteric artery of S. hispidus, causing periarteritis. However, there is a lack of data on the development of vasculitis in the course of infection. To describe the histopathology of vascular lesions in S. hispidus following infection with A. costaricensis. Twenty-one S. hispidus were euthanised at 30, 50, 90 and 114 days post-infection (dpi), and guts and mesentery (including the cecal artery) were collected. Tissues were fixed in Carson's Millonig formalin, histologically processed for paraffin embedding, sectioned with a rotary microtome, and stained with hematoxylin-eosin, resorcin-fuchsin, Perls, Sirius Red (pH = 10.2), Congo Red, and Azan trichrome for brightfield microscopy analysis. At 30 and 50 dpi, live eggs and larvae were present inside the vasa vasorum of the cecal artery, leading to eosinophil infiltrates throughout the vessel adventitia and promoting centripetal vasculitis with disruption of the elastic layers. Disease severity increased at 90 and 114 dpi, when many worms had died and the intensity of the vascular lesions was greatest, with intimal alterations, thrombus formation, iron accumulation, and atherosclerosis. In addition to abdominal angiostrongyliasis, our data suggest that this model could be very useful for studies of autoimmune vasculitis and atherosclerosis.

  13. Accessing the SEED genome databases via Web services API: tools for programmers.

    PubMed

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-06-14

    The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web-services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
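
    Programmatic access of this sort follows the usual HTTP client pattern in Perl. The sketch below uses LWP::UserAgent against a placeholder URL; the endpoint, method name, and parameters are invented for illustration, and the real SEED services should be called according to their own documentation.

        #!/usr/bin/perl
        # Generic sketch of programmatic access to a web-services API from Perl.
        # The endpoint URL and query below are placeholders, not the actual SEED
        # API; consult the SEED documentation for real method names and transport.
        use strict;
        use warnings;
        use LWP::UserAgent;

        my $ua = LWP::UserAgent->new(timeout => 30);

        # Hypothetical annotation query for an E. coli genome identifier.
        my $url = 'https://example.org/seed_api/genome_annotations?genome_id=83333.1';
        my $response = $ua->get($url);

        if ($response->is_success) {
            print $response->decoded_content;
        } else {
            die 'Request failed: ' . $response->status_line . "\n";
        }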

  14. Automated design of paralogue ratio test assays for the accurate and rapid typing of copy number variation

    PubMed Central

    Veal, Colin D.; Xu, Hang; Reekie, Katherine; Free, Robert; Hardwick, Robert J.; McVey, David; Brookes, Anthony J.; Hollox, Edward J.; Talbot, Christopher J.

    2013-01-01

    Motivation: Genomic copy number variation (CNV) can influence susceptibility to common diseases. High-throughput measurement of gene copy number on large numbers of samples is a challenging, yet critical, stage in confirming observations from sequencing or array Comparative Genome Hybridization (CGH). The paralogue ratio test (PRT) is a simple, cost-effective method of accurately determining copy number by quantifying the amplification ratio between a target and reference amplicon. PRT has been successfully applied to several studies analyzing common CNV. However, its use has not been widespread because of difficulties in assay design. Results: We present PRTPrimer (www.prtprimer.org) software for automated PRT assay design. In addition to stand-alone software, the web site includes a database of pre-designed assays for the human genome at an average spacing of 6 kb and a web interface for custom assay design. Other reference genomes can also be analyzed through local installation of the software. The usefulness of PRTPrimer was tested within known CNV, and showed reproducible quantification. This software and database provide assays that can rapidly genotype CNV, cost-effectively, on a large number of samples and will enable the widespread adoption of PRT. Availability: PRTPrimer is available in two forms: a Perl script (version 5.14 and higher) that can be run from the command line on Linux systems and as a service on the PRTPrimer web site (www.prtprimer.org). Contact: cjt14@le.ac.uk Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:23742985
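
    The heart of PRT is that the target:reference amplification ratio scales with the target copy number, so unknown samples are typed against a calibration built from samples of known copy number. A toy Perl sketch with invented calibration values, not PRTPrimer's code:

        #!/usr/bin/perl
        # Toy sketch of the PRT idea: the target:reference peak ratio is roughly
        # proportional to target copy number, so unknowns are typed against a
        # calibration. All numbers here are invented for illustration.
        use strict;
        use warnings;

        sub copy_number {
            my ($target_peak, $reference_peak, $ratio_per_copy) = @_;
            my $raw = $target_peak / $reference_peak / $ratio_per_copy;
            return sprintf '%.0f', $raw;    # round to the nearest integer copy number
        }

        my $ratio_per_copy = 0.48;    # hypothetical per-copy ratio from calibration
        for my $sample ([12150, 12800], [18900, 13100], [6301, 13050]) {
            my ($t, $r) = @$sample;
            printf "ratio=%.2f -> %d copies\n",
                   $t / $r, copy_number($t, $r, $ratio_per_copy);
        }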

  15. The Human Ageing Genomic Resources: online databases and tools for biogerontologists

    PubMed Central

    de Magalhães, João Pedro; Budovsky, Arie; Lehmann, Gilad; Costa, Joana; Li, Yang; Fraifeld, Vadim; Church, George M.

    2009-01-01

    Summary Ageing is a complex, challenging phenomenon that will require multiple, interdisciplinary approaches to unravel its puzzles. To assist basic research on ageing, we developed the Human Ageing Genomic Resources (HAGR). This work provides an overview of the databases and tools in HAGR and describes how the gerontology research community can employ them. Several recent changes and improvements to HAGR are also presented. The two centrepieces in HAGR are GenAge and AnAge. GenAge is a gene database featuring genes associated with ageing and longevity in model organisms, a curated database of genes potentially associated with human ageing, and a list of genes tested for their association with human longevity. A myriad of biological data and information is included for hundreds of genes, making GenAge a reference for research that reflects our current understanding of the genetic basis of ageing. GenAge can also serve as a platform for the systems biology of ageing, and tools for the visualization of protein-protein interactions are also included. AnAge is a database of ageing in animals, featuring over 4,000 species, primarily assembled as a resource for comparative and evolutionary studies of ageing. Longevity records, developmental and reproductive traits, taxonomic information, basic metabolic characteristics, and key observations related to ageing are included in AnAge. Software is also available to aid researchers in the form of Perl modules to automate numerous tasks and as an SPSS script to analyse demographic mortality data. The Human Ageing Genomic Resources are available online at http://genomics.senescence.info. PMID:18986374

  16. Escape Excel: A tool for preventing gene symbol and accession conversion errors

    PubMed Central

    Stewart, Paul A.; Kuenzi, Brent M.; Eschrich, James A.

    2017-01-01

    Background Microsoft Excel automatically converts certain gene symbols, database accessions, and other alphanumeric text into dates, scientific notation, and other numerical representations. These conversions lead to subsequent, irreversible corruption of the imported text. A recent survey of popular genomic literature estimates that one-fifth of all papers with supplementary gene lists suffer from this issue. Results Here, we present an open-source tool, Escape Excel, which prevents these erroneous conversions by generating an escaped text file that can be safely imported into Excel. Escape Excel is implemented in a variety of formats (http://www.github.com/pstew/escape_excel), including a command-line Perl script, a Windows-only Excel Add-In, an OS X drag-and-drop application, a simple web server, and a Galaxy web environment interface. Test server implementations are accessible as a Galaxy interface (http://apostl.moffitt.org) and a simple non-Galaxy web server (http://apostl.moffitt.org:8000/). Conclusions Escape Excel detects and escapes a wide variety of problematic text strings so that they are not erroneously converted into other representations upon importation into Excel. Examples of problematic strings include date-like strings, time-like strings, leading zeroes in front of numbers, and long numeric and alphanumeric identifiers that should not be automatically converted into scientific notation. It is hoped that greater awareness of these potential data corruption issues, together with diligent escaping of text files prior to importation into Excel, will help to reduce the amount of Excel-corrupted data in scientific analyses and publications. PMID:28953918
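
    The escaping idea itself is simple: detect risky fields and emit them as ="value" so Excel keeps them as literal text. The Perl sketch below checks a small assumed subset of patterns; Escape Excel itself detects many more problematic forms.

        #!/usr/bin/perl
        # Simplified sketch of the escaping idea: emit risky fields as ="value" so
        # Excel imports them as literal text. These patterns are a small assumed
        # subset of what Escape Excel actually checks.
        use strict;
        use warnings;

        sub looks_risky {
            my ($field) = @_;
            return 1 if $field =~ /^(JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEPT?|OCT|NOV|DEC)-?\d+$/i;
            return 1 if $field =~ m{^\d+[-/]\w+([-/]\w+)?$};    # date-like: 1-MAR, 2/5/10
            return 1 if $field =~ /^\d+:\d+/;                   # time-like: 12:30
            return 1 if $field =~ /^0\d+$/;                     # leading zero: 007
            return 1 if $field =~ /^\d+(\.\d+)?E\d+$/i;         # sci-notation-like: 2310009E13
            return 0;
        }

        my @rows = (
            [ 'SEPT2', '1-MAR',  '007',   '2310009E13' ],
            [ 'TP53',  'normal', '12:30', '0.05'       ],
        );
        for my $row (@rows) {
            print join("\t", map { looks_risky($_) ? qq{="$_"} : $_ } @$row), "\n";
        }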

  17. Central Satellite Data Repository Supporting Research and Development

    NASA Astrophysics Data System (ADS)

    Han, W.; Brust, J.

    2015-12-01

    Near real-time satellite data are critical to many research and development activities involving atmosphere, land, and ocean processes. Acquiring and managing huge volumes of satellite data with little or no latency in an organization is always a challenge in the big data age. An organization-level data repository is a practical solution to meeting this challenge. The STAR (Center for Satellite Applications and Research of NOAA) Central Data Repository (SCDR) is a scalable, stable, and reliable repository to acquire, manipulate, and disseminate various types of satellite data in an effective and efficient manner. SCDR collects more than 200 data products, which are commonly used by multiple groups in STAR, from NOAA, GOES, Metop, Suomi NPP, Sentinel, Himawari, and other satellites. The processes of acquisition, recording, retrieval, organization, and dissemination are performed in parallel. Multiple data access interfaces, such as FTP, FTPS, HTTP, HTTPS, and RESTful, are supported in the SCDR to obtain satellite data from their providers over high-speed internet. The original satellite data in various raster formats can be parsed in the respective adapter to retrieve data information. The data information is ingested into the corresponding partitioned tables in the central database. All files are distributed equally on the Network File System (NFS) disks to balance the disk load. SCDR provides consistent interfaces (including a Perl utility, a portal, and a RESTful Web service) to locate files of interest easily and quickly and access them directly by over 200 compute servers via NFS. SCDR greatly improves collection and integration of near real-time satellite data, addresses the satellite data requirements of scientists and researchers, and facilitates their primary research and development activities.

  18. A UBK-space Visualization Tool for the Magnetosphere

    NASA Astrophysics Data System (ADS)

    Mohan, M.; Sheldon, R. B.

    2001-12-01

    One of the stumbling blocks to understanding particle transport in the magnetosphere has been the difficulty of following, tracking, and modeling the motion of ions through the realistic magnetic and electric fields of the Earth. Under the weak assumption that the first two invariants remain conserved, Whipple [1978] found a coordinate transformation that makes all charged particles travel on straight lines in UBK-space. The transform permits the quantitative calculation of conservative phase space transport for all particles with energies less than ~100 MeV, especially ring current energies (Sheldon and Gaffey [1993]). Furthermore, Sheldon and Eastman [1997] showed how this transform extended the validity of diffusion models to realistic magnetospheres over the entire energy range. However, widespread usage of this transform has been limited by its non-intuitive UBK coordinates. We present a Virtual Reality Modeling Language (VRML) interface to the calculation of the UBK transform, demonstrating its usefulness in describing both static features of the magnetosphere, such as the plasmapause, and dynamic features, such as ring current injection and loss. The core software is written in C for speed, whereas the interface is constructed in Perl and Javascript. The code is freely available, and intended for portability and modularity. R. B. Sheldon and T. Eastman, "Particle Transport in the Magnetosphere: A New Diffusion Model", GRL, 24(7), 811-814, 1997. Whipple, Jr., E. C., "(U,B,K) coordinates: A natural system for studying magnetospheric convection", JGR, 83, 4318-4326, 1978. Sheldon, R. B. and J. D. Gaffey, Jr., "Particle tracing in the magnetosphere: New algorithms and results", GRL, 20, 767-770, 1993.

  19. An integrated phenomic approach to multivariate allelic association

    PubMed Central

    Medland, Sarah Elizabeth; Neale, Michael Churton

    2010-01-01

    The increased feasibility of genome-wide association has resulted in association becoming the primary method used to localize genetic variants that cause phenotypic variation. Much attention has been focused on the vast multiple testing problems arising from analyzing large numbers of single nucleotide polymorphisms. However, the inflation of experiment-wise type I error rates through testing numerous phenotypes has received less attention. Multivariate analyses can be used to detect both pleiotropic effects that influence a latent common factor and monotropic effects that operate at the variable-specific level, whilst controlling for non-independence between phenotypes. In this study, we present a maximum likelihood approach, which combines both latent and variable-specific tests and which may be used with either individual or family data. Simulation results indicate that in the presence of factor-level association, the combined multivariate (CMV) analysis approach performs well, with a minimal loss of power as compared with a univariate analysis of a factor or sum score (SS). As the deviation between the pattern of allelic effects and the factor loadings increases, the power of univariate analyses of both factor and SSs decreases dramatically, whereas the power of the CMV approach is maintained. We show the utility of the approach by examining the association between dopamine receptor D2 TaqIA and the initiation of marijuana, tranquilizers and stimulants in data from the Add Health Study. Perl scripts that take ped and dat files as input and produce Mx scripts and data for running the CMV approach can be downloaded from www.vipbg.vcu.edu/~sarahme/WriteMx. PMID:19707246

  20. Lectindb: a plant lectin database.

    PubMed

    Chandra, Nagasuma R; Kumar, Nirmal; Jeyakani, Justin; Singh, Desh Deepak; Gowda, Sharan B; Prathima, M N

    2006-10-01

    Lectins, a class of carbohydrate-binding proteins, are now widely recognized to play a range of crucial roles in many cell-cell recognition events triggering several important cellular processes. They encompass different members that are diverse in their sequences, structures, binding site architectures, quaternary structures, carbohydrate affinities, and specificities, as well as their larger biological roles and potential applications. It is not surprising, therefore, that the vast amount of experimental data on lectins available in the literature is so diverse that it becomes difficult and time-consuming, if not impossible, to comprehend the advances in various areas and obtain the maximum benefit. To achieve an effective use of all the data toward understanding the function and possible applications of lectins, an organization of these seemingly independent data into a common framework is essential. An integrated knowledge base (Lectindb, http://nscdb.bic.physics.iisc.ernet.in) together with appropriate analytical tools has therefore been developed, initially for plant lectins, by collating and integrating diverse data. The database has been implemented using MySQL on a Linux platform and web-enabled using Perl-CGI and Java tools. Data for each lectin pertain to taxonomic, biochemical, domain architecture, molecular sequence, and structural details as well as carbohydrate and hence blood group specificities. Extensive links have also been provided for relevant bioinformatics resources and analytical tools. Availability of diverse data integrated into a common framework is expected to be of high value not only for basic studies in lectin biology but also for pursuing several applications in biotechnology, immunology, and clinical practice, using these molecules.
