CLIMLAB: a Python-based software toolkit for interactive, process-oriented climate modeling
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2015-12-01
Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The IPython notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields. However, CLIMLAB is also well suited to be deployed as a computational back-end for a graphical gaming environment based on earth-system modeling.
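The following minimal sketch illustrates the interactive, process-oriented workflow described above. It assumes the climlab package with an EBM (energy balance model) class and an integrate_years() method; exact class and method names may vary between CLIMLAB versions.

    # Minimal sketch of interactive, process-oriented modeling with CLIMLAB.
    # Assumes the climlab package provides an EBM class and integrate_years();
    # names may differ between versions.
    import climlab

    model = climlab.EBM()       # create a zonal-mean energy balance model
    print(model)                # inspect the sub-processes the model is built from
    model.integrate_years(5.)   # step the coupled processes forward in time
    print(model.Ts.mean())      # mean surface temperature after spin-up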
Pynamic: the Python Dynamic Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, G L; Ahn, D H; de Supinksi, B R
2007-07-10
Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large-scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large-scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large-scale system software and tools using Pynamic.
PyMidas: Interface from Python to Midas
NASA Astrophysics Data System (ADS)
Maisala, Sami; Oittinen, Tero
2014-01-01
PyMidas is an interface between Python and MIDAS, the major ESO legacy general purpose data processing system. PyMidas allows a user to exploit both the rich legacy of MIDAS software and the power of Python scripting in a unified interactive environment. PyMidas also allows the usage of other Python-based astronomical analysis systems such as PyRAF.
BioServices: a common Python package to access biological Web Services programmatically.
Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio
2013-12-15
Web interfaces provide access to numerous biological databases. Many can be accessed programmatically via Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based on either Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
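As a hedged illustration of the programmatic access described above, the snippet below queries KEGG and UniProt through BioServices; the methods shown (get, search) follow the package's documented usage but should be treated as indicative rather than authoritative.

    # Hedged example of programmatic Web Service access with BioServices.
    from bioservices import KEGG, UniProt

    kegg = KEGG()
    entry = kegg.get("hsa:7535")             # fetch the KEGG record for human ZAP70
    print(entry[:200])

    uniprot = UniProt()
    results = uniprot.search("ZAP70_HUMAN")  # free-text search of UniProt
    print(results[:200])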
EMPIRE and pyenda: Two ensemble-based data assimilation systems written in Fortran and Python
NASA Astrophysics Data System (ADS)
Geppert, Gernot; Browne, Phil; van Leeuwen, Peter Jan; Merker, Claire
2017-04-01
We present and compare the features of two ensemble-based data assimilation frameworks, EMPIRE and pyenda. Both frameworks allow models to be coupled to the assimilation code using the Message Passing Interface (MPI), leading to fast and efficient coupling between the models and the data-assimilation codes. The Fortran-based system EMPIRE (Employing Message Passing Interface for Researching Ensembles) is optimized for parallel, high-performance computing. It currently includes a suite of data assimilation algorithms, including variants of the ensemble Kalman filter and several particle filters. EMPIRE is targeted at models of all levels of complexity and has been coupled to several geoscience models, e.g. the Lorenz-63 model, a barotropic vorticity model, the general circulation model HadCM3, the ocean model NEMO, and the land-surface model JULES. The Python-based system pyenda (Python Ensemble Data Assimilation) allows Fortran- and Python-based models to be used for data assimilation. Models can be coupled either using MPI or through a Python interface. Using Python allows quick prototyping, and pyenda is aimed at small- to medium-scale models. pyenda currently includes variants of the ensemble Kalman filter and has been coupled to the Lorenz-63 model, an advection-based precipitation nowcasting scheme, and the dynamic global vegetation model JSBACH.
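Neither framework's own API is reproduced here; the following generic NumPy sketch of a stochastic ensemble Kalman filter analysis step illustrates the algorithm family that both EMPIRE and pyenda implement.

    # Generic stochastic ensemble Kalman filter (EnKF) analysis step in NumPy.
    # This illustrates the algorithm family, not code from either framework.
    import numpy as np

    def enkf_analysis(X, y, H, R, rng=np.random.default_rng(0)):
        """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
        H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs error cov."""
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
        HX = H @ X
        HA = HX - HX.mean(axis=1, keepdims=True)
        P_yy = HA @ HA.T / (n_ens - 1) + R             # innovation covariance
        P_xy = A @ HA.T / (n_ens - 1)                  # state-observation cross covariance
        K = np.linalg.solve(P_yy, P_xy.T).T            # Kalman gain (P_yy is symmetric)
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
        return X + K @ (Y - HX)                        # analysis ensemble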
Vella, Michael; Cannon, Robert C; Crook, Sharon; Davison, Andrew P; Ganapathy, Gautham; Robinson, Hugh P C; Silver, R Angus; Gleeson, Padraig
2014-01-01
NeuroML is an XML-based model description language, which provides a powerful common data format for defining and exchanging models of neurons and neuronal networks. In the latest version of NeuroML, the structure and behavior of ion channel, synapse, cell, and network model descriptions are based on underlying definitions provided in LEMS, a domain-independent language for expressing hierarchical mathematical models of physical entities. While declarative approaches for describing models have led to greater exchange of model elements among software tools in computational neuroscience, a frequent criticism of XML-based languages is that they are difficult to work with directly. Here we describe two Application Programming Interfaces (APIs) written in Python (http://www.python.org), which simplify the process of developing and modifying models expressed in NeuroML and LEMS. The libNeuroML API provides a Python object model with a direct mapping to all NeuroML concepts defined by the NeuroML Schema, which facilitates reading and writing the XML equivalents. In addition, it offers a memory-efficient, array-based internal representation, which is useful for handling large-scale connectomics data. The libNeuroML API also includes support for performing common operations that are required when working with NeuroML documents. Access to the LEMS data model is provided by the PyLEMS API, which provides a Python implementation of the LEMS language, including the ability to simulate most models expressed in LEMS. Together, libNeuroML and PyLEMS provide a comprehensive solution for interacting with NeuroML models in a Python environment.
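A minimal sketch of the libNeuroML workflow described above, based on the package's published examples (class and writer names follow the NeuroML Schema; details may differ between releases):

    # Build a small NeuroML document with libNeuroML and serialize it to XML.
    from neuroml import NeuroMLDocument, IafCell
    import neuroml.writers as writers

    doc = NeuroMLDocument(id="example_doc")
    cell = IafCell(id="iaf0", C="1.0 nF", thresh="-50mV", reset="-65mV",
                   leak_conductance="10 nS", leak_reversal="-65mV")
    doc.iaf_cells.append(cell)                 # attach the cell to the document

    writers.NeuroMLWriter.write(doc, "example.nml")   # write the NeuroML XML file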
Python-Assisted MODFLOW Application and Code Development
NASA Astrophysics Data System (ADS)
Langevin, C.
2013-12-01
The U.S. Geological Survey (USGS) has a long history of developing and maintaining free, open-source software for hydrological investigations. The MODFLOW program is one of the most popular hydrologic simulation programs released by the USGS, and it is considered to be the most widely used groundwater flow simulation code. MODFLOW was written using a modular design and a procedural FORTRAN style, which resulted in code that could be understood, modified, and enhanced by many hydrologists. The code is fast, and because it uses standard FORTRAN it can be run on most operating systems. Most MODFLOW users rely on proprietary graphical user interfaces for constructing models and viewing model results. Some recent efforts, however, have focused on construction of MODFLOW models using open-source Python scripts. Customizable Python packages, such as FloPy (https://code.google.com/p/flopy), can be used to generate input files, read simulation results, and visualize results in two and three dimensions. Automating this sequence of steps leads to models that can be reproduced directly from original data and rediscretized in space and time. Python is also being used in the development and testing of new MODFLOW functionality. New packages and numerical formulations can be quickly prototyped and tested first with Python programs before implementation in MODFLOW. This is made possible by the flexible object-oriented design capabilities available in Python, the ability to call FORTRAN code from Python, and the ease with which linear systems of equations can be solved using SciPy, for example. Once new features are added to MODFLOW, Python can then be used to automate comprehensive regression testing and ensure reliability and accuracy of new versions prior to release.
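A hedged FloPy sketch of the scripted model construction described above; class names follow the FloPy documentation, and a MODFLOW executable (here assumed to be mf2005) must be on the path for the run step to succeed:

    # Scripted construction of a minimal steady-state MODFLOW model with FloPy.
    import flopy

    m = flopy.modflow.Modflow(modelname="demo", exe_name="mf2005")
    dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10, top=10.0, botm=0.0)
    bas = flopy.modflow.ModflowBas(m, ibound=1, strt=10.0)   # active cells, starting heads
    lpf = flopy.modflow.ModflowLpf(m, hk=10.0)               # hydraulic conductivity
    pcg = flopy.modflow.ModflowPcg(m)                        # solver
    oc = flopy.modflow.ModflowOc(m)                          # output control

    m.write_input()              # generate MODFLOW input files from the Python objects
    success, _ = m.run_model()   # run MODFLOW and capture convergence status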
A New Python Library for Spectroscopic Analysis with MIDAS Style
NASA Astrophysics Data System (ADS)
Song, Y.; Luo, A.; Zhao, Y.
2013-10-01
ESO MIDAS is a data analysis system that many astronomers still use. Python is a high-level scripting language with many applications in astronomical data processing. We are releasing a new Python library, PydasLib, which implements some MIDAS commands in Python so that users can write MIDAS-style Python code. It is a Python library based on ESO MIDAS functions and is easily used by astronomers who are familiar with MIDAS.
AST: World Coordinate Systems in Astronomy
NASA Astrophysics Data System (ADS)
Berry, David S.; Warren-Smith, Rodney F.
2014-04-01
The AST library provides a comprehensive range of facilities for attaching world coordinate systems to astronomical data, for retrieving and interpreting that information in a variety of formats, including FITS-WCS, and for generating graphical output based on it. Core projection algorithms are provided by WCSLIB (ascl:1108.003) and astrometry is provided by the PAL (ascl:1606.002) and SOFA (ascl:1403.026) libraries. AST bindings are available in Python (pyast), Java (JNIAST) and Perl (Starlink::AST). AST is used as the plotting and astrometry library in DS9 and GAIA, and is distributed separately and as part of the Starlink software collection.
pyPaSWAS: Python-based multi-core CPU and GPU sequence alignment.
Warris, Sven; Timal, N Roshan N; Kempenaar, Marcel; Poortinga, Arne M; van de Geest, Henri; Varbanescu, Ana L; Nap, Jan-Peter
2018-01-01
Our previously published CUDA-only application PaSWAS for Smith-Waterman (SW) sequence alignment of any type of sequence on NVIDIA-based GPUs is platform-specific and has therefore been adopted less widely than it could be. The OpenCL language is supported more widely and allows use on a variety of hardware platforms. Moreover, there is a need to promote the adoption of parallel computing in bioinformatics by making its use and extension simpler through better application of high-level languages commonly used in bioinformatics, such as Python. The novel application pyPaSWAS presents the parallel SW sequence alignment code fully packaged in Python. It is a generic SW implementation running on several hardware platforms with multi-core systems and/or GPUs that provides accurate sequence alignments that can also be inspected for alignment details. Additionally, pyPaSWAS supports affine gap penalties. Python libraries are used for automated system configuration, I/O and logging. This way, the Python environment will stimulate further extension and use of pyPaSWAS. pyPaSWAS presents an easy Python-based environment for accurate and retrievable parallel SW sequence alignments on GPUs and multi-core systems. The strategy of integrating Python with high-performance parallel compute languages to create a developer- and user-friendly environment should be considered for other computationally intensive bioinformatics algorithms.
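To illustrate the algorithm that pyPaSWAS parallelizes, here is a plain-Python Smith-Waterman scoring sketch (not pyPaSWAS code); for brevity it uses a linear gap penalty, whereas pyPaSWAS itself supports affine gap penalties:

    # Smith-Waterman local alignment score with a linear gap penalty.
    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman_score("ACACACTA", "AGCACACA"))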
Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam
2018-03-11
To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three implementation variants were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface-based development and/or command-line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.
Python based high-level synthesis compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard
2014-11-01
This paper presents a Python-based high-level synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation and first results of the Python-based compiler.
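The abstract does not detail the compiler's internals; as a hedged illustration, a typical front end for such a flow starts by parsing the algorithmic Python description with the standard ast module, as sketched below (the actual compiler may work differently):

    # Parse an algorithmic Python function into an abstract syntax tree, the usual
    # first stage of a Python-to-HDL flow.
    import ast

    src = """
    def accumulate(x, acc):
        return acc + x
    """

    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            print("function:", node.name, "args:", [a.arg for a in node.args.args])
        elif isinstance(node, ast.BinOp):
            print("operation:", type(node.op).__name__)  # e.g. Add -> an adder in hardware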
NASA Astrophysics Data System (ADS)
Braun, N.; Hauth, T.; Pulvermacher, C.; Ritter, M.
2017-10-01
Today’s analyses for high-energy physics (HEP) experiments involve processing a large amount of data with highly specialized algorithms. The contemporary workflow from recorded data to final results is based on the execution of small scripts - often written in Python or ROOT macros which call complex compiled algorithms in the background - to perform fitting procedures and generate plots. During recent years interactive programming environments, such as Jupyter, have become popular. Jupyter allows the development of Python-based applications, so-called notebooks, which bundle code, documentation and results, e.g. plots. Advantages over classical script-based approaches include the ability to recompute only parts of the analysis code, which allows for fast and iterative development, and a web-based user front end, which can be hosted centrally and only requires a browser on the user side. In our novel approach, Python and Jupyter are tightly integrated into the Belle II Analysis Software Framework (basf2), currently being developed for the Belle II experiment in Japan. This allows code to be developed in Jupyter notebooks for every aspect of the event simulation, reconstruction and analysis chain. These interactive notebooks can be hosted as a centralized web service via JupyterHub with Docker and used by all scientists of the Belle II Collaboration. Because of its generality and encapsulation, the setup can easily be scaled to large installations.
SPICE-Based Python Packages for ESA Solar System Exploration Mission's Geometry Exploitation
NASA Astrophysics Data System (ADS)
Costa, M.; Grass, M.
2018-04-01
This contribution outlines three Python packages that provide enhanced and extended usage of the SPICE Toolkit APIs, offering higher-level functions and data quick-look capabilities focused on European Space Agency solar system exploration missions.
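The three packages are not named in the abstract; as a hedged illustration of the SPICE call pattern they build on, the snippet below uses the spiceypy wrapper (an assumption), with placeholder kernel and body names:

    # Typical SPICE-in-Python call pattern via spiceypy; file names are placeholders.
    import spiceypy as spice

    spice.furnsh("metakernel.tm")                 # load mission kernels (placeholder name)
    et = spice.str2et("2020-05-01T00:00:00")      # UTC string -> ephemeris time
    pos, light_time = spice.spkpos("MARS", et, "J2000", "LT+S", "SUN")
    print(pos)                                    # Mars position relative to the Sun [km]
    spice.kclear()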
PESO - The Python Based Control System of the Ondrejov 2m Telescope
NASA Astrophysics Data System (ADS)
Skoda, P.; Fuchs, J.; Honsa, J.
2005-12-01
Python has been gaining a good reputation and respectability in many areas of software development. We have chosen Python after getting the new CCD detector for the coudé spectrograph of the Ondřejov observatory 2m telescope. The VersArray detector from Roper Scientific came only with the closed-source library PVCAM of low-level camera control functions for Linux, so we had to write the whole astronomical data acquisition system from scratch and integrate it with the current spectrograph and telescope control systems. The final result of our effort, PESO (Python Exposure System for Ondřejov), is a highly comfortable GUI-based environment allowing the observer to change the spectrograph configuration, choose the detector acquisition mode, select the exposure parameters, and monitor the exposure progress. All of the relevant information from the control computers is written into the FITS headers by the PyFITS module, and the acquired CCD frame is immediately displayed in an SAO DS9 window using XPA calls. The GTK-based front end design was drawn in the Glade visual development tool, giving the shape and position of all widgets in a single XML file, which is used in Python by a simple call to the PyGlade module. We describe our experience with the design and implementation of PESO, stressing the ease of making quick changes to the GUI, together with the capability of separately testing every module using the Python debugger, IPython.
Interactive, process-oriented climate modeling with CLIMLAB
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2016-12-01
Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The Jupyter Notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields.
PyPathway: Python Package for Biological Network Analysis and Visualization.
Xu, Yang; Luo, Xiao-Chun
2018-05-01
Life science studies represent one of the biggest generators of large data sets, mainly because of rapid advances in sequencing technology. Biological networks, including interaction networks and human-curated pathways, are essential to understanding these high-throughput data sets. Biological network analysis offers a method to systematically explore not only the molecular complexity of a particular disease but also the molecular relationships among apparently distinct phenotypes. Several packages for the Python community have been developed, such as BioPython and Goatools. However, tools to perform comprehensive network analysis and visualization are still needed. Here, we have developed PyPathway, an extensible, free and open-source Python package for functional enrichment analysis, network modeling, and network visualization. The network process module supports various interaction network and pathway databases such as Reactome, WikiPathway, STRING, and BioGRID. The network analysis module implements overrepresentation analysis, gene set enrichment analysis, network-based enrichment, and de novo network modeling. Finally, the visualization and data publishing modules enable users to share their analysis by using an easy web application. For package availability, see the first Reference.
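PyPathway's own API is not shown in the abstract; the following generic sketch illustrates the overrepresentation analysis it implements, using SciPy's hypergeometric distribution:

    # Generic overrepresentation analysis (ORA) test; not PyPathway's own API.
    from scipy.stats import hypergeom

    background = 20000   # genes in the background set
    in_pathway = 150     # background genes annotated to the pathway
    selected = 300       # genes in the user's list (e.g. differentially expressed)
    overlap = 12         # user's genes that fall in the pathway

    # P(X >= overlap) under sampling without replacement
    p_value = hypergeom.sf(overlap - 1, background, in_pathway, selected)
    print(f"ORA p-value: {p_value:.3g}")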
OMPC: an Open-Source MATLAB®-to-Python Compiler
Jurica, Peter; van Leeuwen, Cees
2008-01-01
Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577
The ATLAS PanDA Monitoring System and its Evolution
NASA Astrophysics Data System (ADS)
Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.
2011-12-01
The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
The Discovery of XY Sex Chromosomes in a Boa and Python.
Gamble, Tony; Castoe, Todd A; Nielsen, Stuart V; Banks, Jaison L; Card, Daren C; Schield, Drew R; Schuett, Gordon W; Booth, Warren
2017-07-24
For over 50 years, biologists have accepted that all extant snakes share the same ZW sex chromosomes derived from a common ancestor [1-3], with different species exhibiting sex chromosomes at varying stages of differentiation. Accordingly, snakes have been a well-studied model for sex chromosome evolution in animals [1, 4]. A review of the literature, however, reveals no compelling support that boas and pythons possess ZW sex chromosomes [2, 5]. Furthermore, phylogenetic patterns of facultative parthenogenesis in snakes and a sex-linked color mutation in the ball python (Python regius) are best explained by boas and pythons possessing an XY sex chromosome system [6, 7]. Here we demonstrate that a boa (Boa imperator) and python (Python bivittatus) indeed possess XY sex chromosomes, based on the discovery of male-specific genetic markers in both species. We use these markers, along with transcriptomic and genomic data, to identify distinct sex chromosomes in boas and pythons, demonstrating that XY systems evolved independently in each lineage. This discovery highlights the dynamic evolution of vertebrate sex chromosomes and further enhances the value of snakes as a model for studying sex chromosome evolution. Copyright © 2017 Elsevier Ltd. All rights reserved.
GillesPy: A Python Package for Stochastic Model Building and Simulation.
Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R
2016-09-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithm (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy-to-understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.
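GillesPy's own interface is not reproduced here; the following minimal pure-Python direct-method SSA for a birth-death process illustrates the Gillespie algorithm that GillesPy drives through StochKit2:

    # Gillespie direct-method SSA for a birth-death process (illustration only).
    import math, random

    def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, seed=1):
        random.seed(seed)
        t, x, trajectory = 0.0, x0, [(0.0, x0)]
        while t < t_end:
            a1, a2 = k_birth, k_death * x              # reaction propensities
            a0 = a1 + a2
            if a0 == 0:
                break
            t += -math.log(1.0 - random.random()) / a0  # exponential waiting time
            x += 1 if random.random() * a0 < a1 else -1 # choose birth or death
            trajectory.append((t, x))
        return trajectory

    print(ssa_birth_death()[-1])   # final (time, copy number), fluctuating near k_birth/k_death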
PyMOOSE: Interoperable Scripting in Python for MOOSE
Ray, Subhasis; Bhalla, Upinder S.
2008-01-01
Python is emerging as a common scripting language for simulators. This opens up many possibilities for interoperability in the form of analysis, interfaces, and communications between simulators. We report the integration of Python scripting with the Multi-scale Object Oriented Simulation Environment (MOOSE). MOOSE is a general-purpose simulation system for compartmental neuronal models and for models of signaling pathways based on chemical kinetics. We show how the Python-scripting version of MOOSE, PyMOOSE, combines the power of a compiled simulator with the versatility and ease of use of Python. We illustrate this by using Python numerical libraries to analyze MOOSE output online, and by developing a GUI in Python/Qt for a MOOSE simulation. Finally, we build and run a composite neuronal/signaling model that uses both the NEURON and MOOSE numerical engines, and Python as a bridge between the two. Thus PyMOOSE has a high degree of interoperability with analysis routines, with graphical toolkits, and with other simulators. PMID:19129924
OMPC: an Open-Source MATLAB-to-Python Compiler.
Jurica, Peter; van Leeuwen, Cees
2009-01-01
Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.
Kiefer, Patrick; Schmitt, Uwe; Vorholt, Julia A
2013-04-01
The Python-based, open-source eMZed framework was developed for mass spectrometry (MS) users to create tailored workflows for liquid chromatography (LC)/MS data analysis. The goal was to establish a unique framework with comprehensive basic functionalities that are easy to apply and allow for the extension and modification of the framework in a straightforward manner. eMZed supports the iterative development and prototyping of individual evaluation strategies by providing a computing environment and tools for inspecting and modifying underlying LC/MS data. The framework specifically addresses non-expert programmers, as it requires only basic knowledge of Python and relies largely on existing successful open-source software, e.g. OpenMS. The framework eMZed and its documentation are freely available at http://emzed.biol.ethz.ch/. eMZed is published under the GPL 3.0 license, and an online discussion group is available at https://groups.google.com/group/emzed-users. Supplementary data are available at Bioinformatics online.
pysimm: A Python Package for Simulation of Molecular Systems
NASA Astrophysics Data System (ADS)
Fortunato, Michael; Colina, Coray
pysimm, short for python simulation interface for molecular modeling, is a Python package designed to facilitate structure generation and simulation of molecular systems through convenient and programmatic access to object-oriented representations of molecular system data. This poster presents core features of pysimm and design philosophies that highlight a generalized methodology for incorporating third-party software packages through API interfaces. The integration with the LAMMPS simulation package is explained to demonstrate this methodology. pysimm began as a back-end Python library that powered a cloud-based application on nanohub.org for amorphous polymer simulation. The extension from a specific application library to a general-purpose simulation interface is explained. Additionally, this poster highlights the rapid development of new applications to construct polymer chains capable of controlling chain morphology, such as molecular weight distribution and monomer composition.
MYST: a comprehensive high-level AO control tool for GeMS
NASA Astrophysics Data System (ADS)
Rigaut, F.; Neichel, B.; Bec, M.; Garcia-Rissman, A.
2010-07-01
Myst is the Gemini MCAO System (GeMS) high-level control GUI. It is written in Yorick, Python and C. In this paper, we review the software architecture of Myst and its primary purposes, which are many: real-time display, high-level diagnostics, calibrations, and acting as executor/sequencer of high-level actions (closing the loop, coordinating dithers, etc.).
SU-E-T-11: A Cloud Based CT and LINAC QA Data Management System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiersma, R; Grelewicz, Z; Belcher, A
Purpose: The current status quo of QA data management consists of a mixture of paper-based forms and spreadsheets for recording the results of daily, monthly, and yearly QA tests for both CT scanners and LINACs. Unfortunately, such systems suffer from a host of problems: (1) records can be easily lost or destroyed, (2) data is difficult to access — one must physically hunt down records, (3) there are poor or no means of historical data analysis, and (4) there is no remote monitoring of machine performance off-site. To address these issues, a cloud-based QA data management system was developed and implemented. Methods: A responsive tablet interface that optimizes clinic workflow with an easy-to-navigate interface accessible from any web browser was implemented in HTML/javascript/CSS to allow user mobility when entering QA data. Automated image QA was performed using a phantom QA kit developed in Python that is applicable to any phantom and is currently being used with the Gammex ACR, Las Vegas, Leeds, and Catphan phantoms for performing automated CT, MV, kV, and CBCT QAs, respectively. A Python-based resource management system was used to distribute and manage intensive CPU tasks such as QA phantom image analysis or LaTeX-to-PDF QA report generation to independent process threads or different servers such that website performance is not affected. Results: To date the cloud QA system has performed approximately 185 QA procedures. Approximately 200 QA parameters are being actively tracked by the system on a monthly basis. Electronic access to historical QA parameter information was successful in proactively identifying a Linac CBCT scanner’s performance degradation. Conclusion: A fully comprehensive cloud-based QA data management system was successfully implemented for the first time. Potential machine performance issues were proactively identified that would have been otherwise missed by a paper- or spreadsheet-based QA system.
Status of parallel Python-based implementation of UEDGE
NASA Astrophysics Data System (ADS)
Umansky, M. V.; Pankin, A. Y.; Rognlien, T. D.; Dimits, A. M.; Friedman, A.; Joseph, I.
2017-10-01
The tokamak edge transport code UEDGE has long used the code-development and run-time framework Basis. However, with support for Basis expected to end in the coming years, and with the advent of the modern numerical language Python, it has become desirable to move UEDGE to Python to ensure its long-term viability. Our new Python-based UEDGE implementation takes advantage of the portable build system developed for FACETS. The new implementation gives access to Python's graphical libraries and numerical packages for pre- and post-processing, and support for HDF5 simplifies exchanging data. The older serial version of UEDGE used the Newton-Krylov solver NKSOL for time-stepping. The renovated implementation uses backward Euler discretization with nonlinear solvers from PETSc, which promises to significantly improve UEDGE's parallel performance. We will report on an assessment of some of the extended UEDGE capabilities emerging in the new implementation, and will discuss future directions. Work performed for U.S. DOE by LLNL under contract DE-AC52-07NA27344.
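As a hedged illustration of the time-stepping strategy described above, the sketch below advances a toy stiff system with backward Euler, using a SciPy nonlinear solver in place of the PETSc solvers used by the actual implementation:

    # Backward Euler time step solved with a Newton-type nonlinear solver (SciPy
    # stands in for PETSc here; the toy right-hand side is illustrative only).
    import numpy as np
    from scipy.optimize import fsolve

    def rhs(y):
        # toy stiff system: fast relaxation of y[0], slow drive of y[1]
        return np.array([-1000.0 * (y[0] - np.cos(y[1])), 1.0])

    def backward_euler_step(y_n, dt):
        residual = lambda y_next: y_next - y_n - dt * rhs(y_next)
        return fsolve(residual, y_n)     # implicit update via a Newton-like solve

    y, dt = np.array([0.0, 0.0]), 0.1
    for _ in range(20):
        y = backward_euler_step(y, dt)
    print(y)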
Providing Web Interfaces to the NSF EarthScope USArray Transportable Array
NASA Astrophysics Data System (ADS)
Vernon, Frank; Newman, Robert; Lindquist, Kent
2010-05-01
Since April 2004 the EarthScope USArray seismic network has grown to over 850 broadband stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. Providing secure, yet open, access to real-time and archived data for a broad range of audiences is best served by a series of platform agnostic low-latency web-based applications. We present a framework of tools that mediate between the world wide web and Boulder Real Time Technologies Antelope Environmental Monitoring System data acquisition and archival software. These tools provide comprehensive information to audiences ranging from network operators and geoscience researchers, to funding agencies and the general public. This ranges from network-wide to station-specific metadata, state-of-health metrics, event detection rates, archival data and dynamic report generation over a station's two year life span. Leveraging open source web-site development frameworks for both the server side (Perl, Python and PHP) and client-side (Flickr, Google Maps/Earth and jQuery) facilitates the development of a robust extensible architecture that can be tailored on a per-user basis, with rapid prototyping and development that adheres to web-standards. Typical seismic data warehouses allow online users to query and download data collected from regional networks, without the scientist directly visually assessing data coverage and/or quality. Using a suite of web-based protocols, we have recently developed an online seismic waveform interface that directly queries and displays data from a relational database through a web-browser. Using the Python interface to Datascope and the Python-based Twisted network package on the server side, and the jQuery Javascript framework on the client side to send and receive asynchronous waveform queries, we display broadband seismic data using the HTML Canvas element that is globally accessible by anyone using a modern web-browser. We are currently creating additional interface tools to create a rich-client interface for accessing and displaying seismic data that can be deployed to any system running the Antelope Real Time System. The software is freely available from the Antelope contributed code Git repository (http://www.antelopeusersgroup.org).
PySeqLab: an open source Python package for sequence labeling and segmentation.
Allam, Ahmed; Krauthammer, Michael
2017-11-01
Text and genomic data are composed of sequential tokens, such as words and nucleotides, that give rise to higher-order syntactic constructs. In this work, we aim at providing a comprehensive Python library implementing conditional random fields (CRFs), a class of probabilistic graphical models, for robust prediction of these constructs from sequential data. Python Sequence Labeling (PySeqLab) is an open-source package for performing supervised learning in structured prediction tasks. It implements CRF models, that is, discriminative models, ranging from (i) first-order to higher-order linear-chain CRFs, and from (ii) first-order to higher-order semi-Markov CRFs (semi-CRFs). Moreover, it provides multiple learning algorithms for estimating model parameters, such as (i) stochastic gradient descent (SGD) and its multiple variations, (ii) structured perceptron with multiple averaging schemes supporting exact and inexact search using the 'violation-fixing' framework, (iii) the search-based probabilistic online learning algorithm (SAPO) and (iv) an interface for the Broyden-Fletcher-Goldfarb-Shanno (BFGS) and limited-memory BFGS algorithms. Viterbi and Viterbi A* are used for inference and decoding of sequences. Using PySeqLab, we built models (classifiers) and evaluated their performance in three different domains: (i) biomedical natural language processing (NLP), (ii) predictive DNA sequence analysis and (iii) human activity recognition (HAR). State-of-the-art performance comparable to machine-learning-based systems was achieved in the three domains without feature engineering or the use of knowledge sources. PySeqLab is available through https://bitbucket.org/A_2/pyseqlab with tutorials and documentation. ahmed.allam@yale.edu or michael.krauthammer@yale.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
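As an illustration of the inference step named above, here is a compact NumPy Viterbi decoder for a generic linear-chain model; it is not PySeqLab's own interface:

    # Viterbi decoding for a linear-chain model with per-position label scores and
    # label-transition scores.
    import numpy as np

    def viterbi(emission, transition):
        """emission: (T, L) per-position label scores; transition: (L, L) scores."""
        T, L = emission.shape
        score = np.zeros((T, L))
        backptr = np.zeros((T, L), dtype=int)
        score[0] = emission[0]
        for t in range(1, T):
            cand = score[t - 1][:, None] + transition + emission[t][None, :]
            backptr[t] = cand.argmax(axis=0)
            score[t] = cand.max(axis=0)
        path = [int(score[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(backptr[t, path[-1]]))
        return path[::-1]

    print(viterbi(np.log([[0.7, 0.3], [0.4, 0.6], [0.5, 0.5]]),
                  np.log([[0.8, 0.2], [0.3, 0.7]])))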
CyNEST: a maintainable Cython-based interface for the NEST simulator
Zaytsev, Yury V.; Morrison, Abigail
2014-01-01
NEST is a simulator for large-scale networks of spiking point neuron models (Gewaltig and Diesmann, 2007). Originally, simulations were controlled via the Simulation Language Interpreter (SLI), a built-in scripting facility implementing a language derived from PostScript (Adobe Systems, Inc., 1999). The introduction of PyNEST (Eppler et al., 2008), the Python interface for NEST, enabled users to control simulations using Python. As the majority of NEST users found PyNEST easier to use and to combine with other applications, it immediately displaced SLI as the default NEST interface. However, developing and maintaining PyNEST has become increasingly difficult over time. This is partly because adding new features requires writing low-level C++ code intermixed with calls to the Python/C API, which is unrewarding. Moreover, the Python/C API evolves with each new version of Python, which results in a proliferation of version-dependent code branches. In this contribution we present the re-implementation of PyNEST in the Cython language, a superset of Python that additionally supports the declaration of C/C++ types for variables and class attributes, and provides a convenient foreign function interface (FFI) for invoking C/C++ routines (Behnel et al., 2011). Code generation via Cython allows the production of smaller and more maintainable bindings, including increased compatibility with all supported Python releases without additional burden for NEST developers. Furthermore, this novel approach opens up the possibility to support alternative implementations of the Python language at no cost given a functional Cython back-end for the corresponding implementation, and also enables cross-compilation of Python bindings for embedded systems and supercomputers alike. PMID:24672470
Generation of Test Questions from RDF Files Using PYTHON and SPARQL
NASA Astrophysics Data System (ADS)
Omarbekova, Assel; Sharipbay, Altynbek; Barlybaev, Alibek
2017-02-01
This article describes the development of a system for the automatic generation of test questions from a knowledge base. The work is applied in nature and provides detailed examples of ontology development and the implementation of SPARQL queries over RDF documents. It also describes the implementation of the question-generating program in the Python programming language, including the libraries needed for working with RDF files.
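The paper does not name the Python RDF library it uses; as a hedged sketch of the approach, the snippet below queries an RDF document with SPARQL via rdflib, with hypothetical ontology terms and file names:

    # Query an RDF knowledge base with SPARQL from Python using rdflib;
    # the ontology predicate and file name are placeholders.
    import rdflib

    g = rdflib.Graph()
    g.parse("knowledge_base.rdf")          # load the RDF document

    query = """
        SELECT ?term ?definition WHERE {
            ?term <http://example.org/ontology#hasDefinition> ?definition .
        }
    """
    for term, definition in g.query(query):
        print(f"Question: What is {term}?  Answer: {definition}")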
Wall, Christopher E; Cozza, Steven; Riquelme, Cecilia A; McCombie, W Richard; Heimiller, Joseph K; Marr, Thomas G; Leinwand, Leslie A
2011-01-01
The infrequently feeding Burmese python (Python molurus) experiences significant and rapid postprandial cardiac hypertrophy followed by regression as digestion is completed. To begin to explore the molecular mechanisms of this response, we have sequenced and assembled the fasted and postfed Burmese python heart transcriptomes with Illumina technology using the chicken (Gallus gallus) genome as a reference. In addition, we have used RNA-seq analysis to identify differences in the expression of biological processes and signaling pathways between fasted, 1 day postfed (DPF), and 3 DPF hearts. Out of a combined transcriptome of ∼2,800 mRNAs, 464 genes were differentially expressed. Genes showing differential expression at 1 DPF compared with fasted were enriched for biological processes involved in metabolism and energetics, while genes showing differential expression at 3 DPF compared with fasted were enriched for processes involved in biogenesis, structural remodeling, and organization. Moreover, we present evidence for the activation of physiological and not pathological signaling pathways in this rapid, novel model of cardiac growth in pythons. Together, our data provide the first comprehensive gene expression profile for a reptile heart.
A Multidisciplinary Tool for Systems Analysis of Planetary Entry, Descent, and Landing (SAPE)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2009-01-01
SAPE is a Python-based multidisciplinary analysis tool for systems analysis of planetary entry, descent, and landing (EDL) for Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Titan. The purpose of SAPE is to provide a variable-fidelity capability for conceptual and preliminary analysis within the same framework. SAPE includes the following analysis modules: geometry, trajectory, aerodynamics, aerothermal, thermal protection system, and structural sizing. SAPE uses the Python language, a platform-independent, open-source language, for integration and for the user interface. The development has relied heavily on the object-oriented programming capabilities available in Python. Modules are provided to interface with commercial and government off-the-shelf software components (e.g., thermal protection systems and finite-element analysis). SAPE runs on Microsoft Windows and Apple Mac OS X and has been partially tested on Linux.
A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling
NASA Astrophysics Data System (ADS)
Jaxa-Rozen, M.
2016-12-01
The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).
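A hedged sketch of the coupling pattern described above, alternately stepping a NetLogo agent-based model and a MODFLOW model from Python; the pyNetLogo and FloPy calls follow those libraries' documentation, and the model files and reporter names are placeholders:

    # Alternate stepping of an agent-based model (NetLogo) and a groundwater model
    # (MODFLOW via FloPy) from Python; file names and reporters are placeholders.
    import flopy
    import pyNetLogo

    netlogo = pyNetLogo.NetLogoLink(gui=False)       # headless NetLogo workspace
    netlogo.load_model("groundwater_users.nlogo")    # placeholder agent-based model
    netlogo.command("setup")

    mf = flopy.modflow.Modflow.load("aquifer.nam")   # placeholder existing MODFLOW model

    for year in range(10):
        netlogo.command("go")                                        # agents decide pumping
        pumping = netlogo.report("sum [pumping-rate] of turtles")    # hypothetical reporter
        # ...map `pumping` onto the MODFLOW well package, rerun the groundwater
        # model, and feed simulated heads back to the agents...
        mf.write_input()
        mf.run_model(silent=True)

    netlogo.kill_workspace()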
Jensen, Bjarke; Nyengaard, Jens R; Pedersen, Michael; Wang, Tobias
2010-12-01
The hearts of all snakes and lizards consist of two atria and a single incompletely divided ventricle. In general, the squamate ventricle is subdivided into three chambers: cavum arteriosum (left), cavum venosum (medial) and cavum pulmonale (right). Although a similar division also applies to the heart of pythons, this family of snakes is unique amongst snakes in having intracardiac pressure separation. Here we provide a detailed anatomical description of the cardiac structures that confer this functional division. We measured the masses and volumes of the ventricular chambers, and we describe the gross morphology based on dissections of the heart from 13 ball pythons (Python regius) and one Burmese python (P. molurus). The cavum venosum is much reduced in pythons and constitutes approximately 10% of the cavum arteriosum. We suggest that shunts will always be less than 20%, while other studies conclude up to 50%. The high-pressure cavum arteriosum accounted for approximately 75% of the total ventricular mass, and was twice as dense as the low-pressure cavum pulmonale. The reptile ventricle has a core of spongious myocardium, but the three ventricular septa that separate the pulmonary and systemic chambers--the muscular ridge, the bulbuslamelle and the vertical septum--all had layers of compact myocardium. Pythons, however, have unique pads of connective tissue on the site of pressure separation. Because the hearts of varanid lizards, which also are endowed with pressure separation, share many of these morphological specializations, we propose that intraventricular compact myocardium is an indicator of high-pressure systems and possibly pressure separation.
Statistical Learning Analysis in Neuroscience: Aiming for Transparency
Hanke, Michael; Halchenko, Yaroslav O.; Haxby, James V.; Pollmann, Stefan
2009-01-01
Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires “neuroscience-aware” technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here, we review its features and applicability to various neural data modalities. PMID:20582270
Python based integration of GEM detector electronics with JET data acquisition system
NASA Astrophysics Data System (ADS)
Zabołotny, Wojciech M.; Byszuk, Adrian; Chernyshova, Maryna; Cieszewski, Radosław; Czarski, Tomasz; Dalley, Simon; Hogben, Colin; Jakubowska, Katarzyna L.; Kasprowicz, Grzegorz; Poźniak, Krzysztof; Rzadkiewicz, Jacek; Scholz, Marek; Shumack, Amy
2014-11-01
This paper presents a system integrating the dedicated measurement and control electronics for Gas Electron Multiplier (GEM) detectors with the Control and Data Acquisition System (CODAS) at the JET facility in Culham, England. The presented system performs the high-level procedures necessary to calibrate the GEM detector and to protect it against possible malfunctions or dangerous changes in operating conditions. The system also allows control of the GEM detectors from CODAS: setting their parameters, checking their state, starting the plasma measurement and reading the results. The system has been implemented in the Python language, using advanced libraries for network communication protocols, object-based hardware management and data processing.
The HST/WFC3 Quicklook Project: A User Interface to Hubble Space Telescope Wide Field Camera 3 Data
NASA Astrophysics Data System (ADS)
Bourque, Matthew; Bajaj, Varun; Bowers, Ariel; Dulude, Michael; Durbin, Meredith; Gosmeyer, Catherine; Gunning, Heather; Khandrika, Harish; Martlin, Catherine; Sunnquist, Ben; Viana, Alex
2017-06-01
The Hubble Space Telescope's Wide Field Camera 3 (WFC3) instrument, comprised of two detectors, UVIS (Ultraviolet-Visible) and IR (Infrared), has been acquiring ~ 50-100 images daily since its installation in 2009. The WFC3 Quicklook project provides a means for instrument analysts to store, calibrate, monitor, and interact with these data through the various Quicklook systems: (1) a ~ 175 TB filesystem, which stores the entire WFC3 archive on disk, (2) a MySQL database, which stores image header data, (3) a Python-based automation platform, which currently executes 22 unique calibration/monitoring scripts, (4) a Python-based code library, which provides system functionality such as logging, downloading tools, database connection objects, and filesystem management, and (5) a Python/Flask-based web interface to the Quicklook system. The Quicklook project has enabled large-scale WFC3 analyses and calibrations, such as the monitoring of the health and stability of the WFC3 instrument, the measurement of ~ 20 million WFC3/UVIS Point Spread Functions (PSFs), the creation of WFC3/IR persistence calibration products, and many others.
Pycellerator: an arrow-based reaction-like modelling language for biological simulations.
Shapiro, Bruce E; Mjolsness, Eric
2016-02-15
We introduce Pycellerator, a Python library for reading Cellerator arrow notation from standard text files, conversion to differential equations, generating stand-alone Python solvers, and optionally running and plotting the solutions. All of the original Cellerator arrows, which represent reactions ranging from mass action, Michaelis-Menten-Henri (MMH) and Gene-Regulation (GRN) to Monod-Wyman-Changeux (MWC), user-defined reactions and enzymatic expansions (KMech), were previously represented with the Mathematica extended character set. These are now typed as reaction-like commands in ASCII text files that are read by Pycellerator, which includes a Python command line interface (CLI), a Python application programming interface (API) and an IPython notebook interface. Cellerator reaction arrows are now input in text files. The arrows are parsed by Pycellerator and translated into differential equations in Python, and Python code is automatically generated to solve the system. Time courses are produced by executing the auto-generated Python code. Users have full freedom to modify the solver and utilize the complete set of standard Python tools. The new libraries are completely independent of the old Cellerator software and do not require Mathematica. All software is available (GPL) from the github repository at https://github.com/biomathman/pycellerator/releases. Details, including installation instructions and a glossary of acronyms and terms, are given in the Supplementary information. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
ARCHANGEL: Galaxy Photometry System
NASA Astrophysics Data System (ADS)
Schombert, James
2011-07-01
ARCHANGEL is a Unix-based package for the surface photometry of galaxies. While oriented for large angular size systems (i.e. many pixels), its tools can be applied to any imaging data of any size. The package core contains routines to perform the following critical galaxy photometry functions: sky determination; frame cleaning; ellipse fitting; profile fitting; and total and isophotal magnitudes. The goal of the package is to provide an automated, assembly-line type of reduction system for galaxy photometry of space-based or ground-based imaging data. The procedures outlined in the documentation are flux independent; thus, these routines can be used for non-optical data as well as typical imaging datasets. ARCHANGEL has been tested on several current OS's (RedHat Linux, Ubuntu Linux, Solaris, Mac OS X). A tarball for installation is available at the download page. The main routines are Python and FORTRAN based; therefore, a current installation of Python and a FORTRAN compiler are required. The ARCHANGEL package also contains Python hooks to the PGPLOT package, an XML processor and network tools which automatically link to data archives (i.e. NED, HST, 2MASS, etc) to download images in a non-interactive manner.
GPAW - massively parallel electronic structure calculations with Python-based software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enkovaara, J.; Romero, N.; Shende, S.
2011-01-01
Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, the software packages for these kinds of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages such as Python can increase programmer efficiency, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity-enhancing features together with good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using a combination of the Python and C programming languages. While the chosen approach works well on standard workstations and in Unix environments, massively parallel supercomputing systems can present some challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.
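From the user's side, the Python front end makes a calculation read like a short script; a minimal sketch (assuming a standard GPAW/ASE installation, with settings that are illustrative rather than converged) might look like this:

    # Minimal GPAW/ASE sketch: total energy of a water molecule.
    # Settings are illustrative, not converged production parameters.
    from ase.build import molecule
    from gpaw import GPAW

    atoms = molecule('H2O')
    atoms.center(vacuum=4.0)                     # place the molecule in a box with 4 A of vacuum
    atoms.calc = GPAW(xc='PBE', txt='h2o.txt')   # Python front end driving compiled C numerics
    print(atoms.get_potential_energy())          # total energy in eV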
Hart, Reece K; Rico, Rudolph; Hare, Emily; Garcia, John; Westbrook, Jody; Fusaro, Vincent A
2015-01-15
Biological sequence variants are commonly represented in scientific literature, clinical reports and databases of variation using the mutation nomenclature guidelines endorsed by the Human Genome Variation Society (HGVS). Despite the widespread use of the standard, no freely available, comprehensive programming libraries exist. Here we report an open-source and easy-to-use Python library that facilitates the parsing, manipulation, formatting and validation of variants according to the HGVS specification. The current implementation focuses on the subset of the HGVS recommendations that precisely describe sequence-level variation relevant to the application of high-throughput sequencing to clinical diagnostics. The package is released under the Apache 2.0 open-source license. Source code, documentation and issue tracking are available at http://bitbucket.org/hgvs/hgvs/. Python packages are available at PyPI (https://pypi.python.org/pypi/hgvs). Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
Hart, Reece K.; Rico, Rudolph; Hare, Emily; Garcia, John; Westbrook, Jody; Fusaro, Vincent A.
2015-01-01
Summary: Biological sequence variants are commonly represented in scientific literature, clinical reports and databases of variation using the mutation nomenclature guidelines endorsed by the Human Genome Variation Society (HGVS). Despite the widespread use of the standard, no freely available, comprehensive programming libraries exist. Here we report an open-source and easy-to-use Python library that facilitates the parsing, manipulation, formatting and validation of variants according to the HGVS specification. The current implementation focuses on the subset of the HGVS recommendations that precisely describe sequence-level variation relevant to the application of high-throughput sequencing to clinical diagnostics. Availability and implementation: The package is released under the Apache 2.0 open-source license. Source code, documentation and issue tracking are available at http://bitbucket.org/hgvs/hgvs/. Python packages are available at PyPI (https://pypi.python.org/pypi/hgvs). Contact: reecehart@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25273102
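For orientation, a minimal sketch of parsing a variant with the library might look as follows (the transcript accession and variant string are invented for illustration):

    # Parse an HGVS variant string and inspect its components.
    # The accession and variant below are illustrative only.
    import hgvs.parser

    hp = hgvs.parser.Parser()
    var = hp.parse_hgvs_variant('NM_012345.6:c.76A>T')
    print(var.ac)        # transcript accession
    print(var.type)      # 'c' (coding DNA)
    print(var.posedit)   # position and edit, e.g. 76A>T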
Note: Tormenta: An open source Python-powered control software for camera based optical microscopy.
Barabas, Federico M; Masullo, Luciano A; Stefani, Fernando D
2016-12-01
Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.
Note: Tormenta: An open source Python-powered control software for camera based optical microscopy
NASA Astrophysics Data System (ADS)
Barabas, Federico M.; Masullo, Luciano A.; Stefani, Fernando D.
2016-12-01
Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.
NASA Technical Reports Server (NTRS)
Lang, Timothy J.
2015-01-01
At NASA Marshall Space Flight Center (MSFC), Python is used in several different ways to analyze and visualize precipitating weather systems. A number of different Python-based software packages have been developed, which are available to the larger scientific community. The approach in all these packages is to utilize pre-existing Python modules as well as to be object-oriented and scalable. The first package that will be described and demonstrated is the Python Advanced Microwave Precipitation Radiometer (AMPR) Data Toolkit, or PyAMPR for short. PyAMPR reads geolocated brightness temperature data from any flight of the AMPR airborne instrument over its 25-year history into a common data structure suitable for user-defined analyses. It features rapid, simplified (i.e., one line of code) production of quick-look imagery, including Google Earth overlays, swath plots of individual channels, and strip charts showing multiple channels at once. These plotting routines are also capable of significant customization for detailed, publication-ready figures. Deconvolution of the polarization-varying channels to static horizontally and vertically polarized scenes is also available. Examples will be given of PyAMPR's contribution toward real-time AMPR data display during the Integrated Precipitation and Hydrology Experiment (IPHEx), which took place in the Carolinas during May-June 2014. The second software package is the Marshall Multi-Radar/Multi-Sensor (MRMS) Mosaic Python Toolkit, or MMM-Py for short. MMM-Py was designed to read, analyze, and display three-dimensional national mosaicked reflectivity data produced by the NOAA National Severe Storms Laboratory (NSSL). MMM-Py can read MRMS mosaics from either their unique binary format or their converted NetCDF format. It can also read and properly interpret the current mosaic design (4 regional tiles) as well as mosaics produced prior to late July 2013 (8 tiles). MMM-Py can easily stitch multiple tiles together to provide a larger regional or national picture of precipitating weather systems. Composites, horizontal and vertical cross-sections, and combinations thereof are easily displayed using as little as one line of code. MMM-Py can also write to the native MRMS binary format, and sub-sectioning of tiles (or multiple stitched tiles) is anticipated to be in place by the time of this meeting. Thus, MMM-Py also can be used to power the creation of custom mosaics for targeted regional studies. Overlays of other data (e.g., lightning observations) are easily accomplished. Demonstrations of MMM-Py, including the creation of animations, will be shown. Finally, Marshall has done significant work to interface Python-based analysis routines with the U.S. Department of Energy's Py-ART software package for radar data ingest, processing, and analysis. One example of this is the Python Turbulence Detection Algorithm (PyTDA), an MSFC-based implementation of the National Center for Atmospheric Research (NCAR) Turbulence Detection Algorithm (NTDA) for the purposes of convective-scale analysis, situational awareness, and forensic meteorology. PyTDA exploits Py-ART's radar data ingest routines and data model to rapidly produce aviation-relevant turbulence estimates from Doppler radar data. Work toward processing speed optimization and better integration within the Py-ART framework will be highlighted.
Python-based analysis within the Py-ART framework is also being done for new research related to intercomparison of ground-based radar data with satellite estimates of ocean winds, as well as research on the electrification of pyrocumulus clouds.
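As an illustration of the "one line of code" quick-look workflow described above, a PyAMPR session might look like the following sketch; the file name is invented, and the reader class and plotting method names reflect the package documentation as we understand it, so treat them as assumptions:

    # Hedged sketch: read an AMPR Level-2 file and produce a quick-look swath plot.
    # The file name is illustrative; AmprTb and plot_ampr_track are assumed from
    # the PyAMPR documentation and may differ between versions.
    from pyampr.pyampr import AmprTb

    ampr = AmprTb('AMPR_IPHEX_20140601_L2.nc')
    ampr.plot_ampr_track()    # one-line quick-look of a brightness-temperature swath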
Endocardial fibrosarcoma in a reticulated python (Python reticularis).
Gumber, Sanjeev; Nevarez, Javier G; Cho, Doo-Youn
2010-11-01
A female reticulated python (Python reticularis) of unknown age was presented with a history of lethargy, weakness, and a distended coelom. Physical examination revealed severe dystocia and stomatitis. The reticulated python was euthanized due to a poor clinical prognosis. Postmortem examination revealed marked distention of the reproductive tract with 26 eggs (10-12 cm in diameter), pericardial effusion, and a slightly firm, pale tan mass (3-4 cm in diameter) adhered to the endocardium at the base of the aorta. Based on histopathologic and transmission electron microscopic findings, the diagnosis of endocardial fibrosarcoma was made.
Land Ice Verification and Validation Kit
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-07-15
To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and a performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-4-bit evaluation, and plots of tests where differences occur.
Measuring an artificial intelligence system's performance on a Verbal IQ test for young children
NASA Astrophysics Data System (ADS)
Ohlsson, Stellan; Sloan, Robert H.; Turán, György; Urasky, Aaron
2017-07-01
We administered the Verbal IQ (VIQ) part of the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) to the ConceptNet 4 artificial intelligence (AI) system. The test questions (e.g. "Why do we shake hands?") were translated into ConceptNet 4 inputs using a combination of the simple natural language processing tools that come with ConceptNet together with short Python programs that we wrote. The question answering used a version of ConceptNet based on spectral methods. The ConceptNet system scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5-7 year olds. Large variations among subtests indicate potential areas of improvement. In particular, results were strongest for the Vocabulary and Similarities subtests, intermediate for the Information subtest and lowest for the Comprehension and Word Reasoning subtests. Comprehension is the subtest most strongly associated with common sense. The large variations among subtests and ordinary common sense strongly suggest that the WPPSI-III VIQ results do not show that "ConceptNet has the verbal abilities of a four-year-old". Rather, children's IQ tests offer one objective metric for the evaluation and comparison of AI systems. Also, this work continues previous research on psychometric AI.
Comparison of cyclic correlation algorithm implemented in matlab and python
NASA Astrophysics Data System (ADS)
Carr, Richard; Whitney, James
Simulation is a necessary step for all engineering projects. Simulation gives engineers an approximation of how their devices will perform under different circumstances, without having to build, or before building, a physical prototype. This is especially true for space-bound devices, i.e., space communication systems, where the impact of system malfunction or failure is several orders of magnitude over that of terrestrial applications. Therefore having a reliable simulation tool is key in developing these devices and systems. MathWorks Matrix Laboratory (MATLAB) is matrix-based software used by scientists and engineers to solve problems and perform complex simulations. MATLAB has a number of applications in a wide variety of fields, which include communications, signal processing, image processing, mathematics, economics and physics. Because of its many uses MATLAB has become the preferred software for many engineers; it is also very expensive, especially for students and startups. One alternative to MATLAB is Python. Python is a powerful, easy-to-use, open-source programming environment that can be used to perform many of the same functions as MATLAB. The Python programming environment has been steadily gaining popularity in niche programming circles. While there are not as many functions included in the software as in MATLAB, many open-source functions have been developed and are available to be downloaded for free. This paper illustrates how Python can implement the cyclic correlation algorithm and compares the results to the cyclic correlation algorithm implemented in the MATLAB environment. Some of the characteristics to be compared are the accuracy and precision of the results, and the length of the programs. The paper will demonstrate that Python is capable of performing simulations of complex algorithms such as cyclic correlation.
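For concreteness, a cyclic (circular) correlation of the kind compared in the paper can be written in a few lines of NumPy via the FFT; this is a generic sketch, not the authors' benchmark code:

    # Circular cross-correlation via the FFT; mirrors the MATLAB idiom
    # ifft(fft(x) .* conj(fft(y))) for equal-length sequences.
    import numpy as np

    def cyclic_correlation(x, y):
        X = np.fft.fft(x)
        Y = np.fft.fft(y)
        return np.fft.ifft(X * np.conj(Y))

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.roll(x, 1)                   # y is x circularly shifted by one sample
    r = cyclic_correlation(x, y)
    print(np.argmax(np.abs(r)))         # peak index corresponds to the circular lag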
Analyzing rasters, vectors and time series using new Python interfaces in GRASS GIS 7
NASA Astrophysics Data System (ADS)
Petras, Vaclav; Petrasova, Anna; Chemin, Yann; Zambelli, Pietro; Landa, Martin; Gebbert, Sören; Neteler, Markus; Löwe, Peter
2015-04-01
GRASS GIS 7 is a free and open source GIS software developed and used by many scientists (Neteler et al., 2012). While some users of GRASS GIS prefer its graphical user interface, a significant part of the scientific community takes advantage of various scripting and programming interfaces offered by GRASS GIS to develop new models and algorithms. Here we will present different interfaces added to GRASS GIS 7 and available in Python, a popular programming language and environment in geosciences. These Python interfaces are designed to satisfy the needs of scientists and programmers under various circumstances. PyGRASS (Zambelli et al., 2013) is a new object-oriented interface to GRASS GIS modules and libraries. The GRASS GIS libraries are implemented in C to ensure maximum performance and the PyGRASS interface provides an intuitive, pythonic access to their functionality. The GRASS GIS Python scripting library is another way of accessing GRASS GIS modules. It combines the simplicity of Bash and the efficiency of the Python syntax. When full access to all low-level and advanced functions and structures from the GRASS GIS library is required, Python programmers can use an interface based on the Python ctypes package. The ctypes interface provides complete, direct access to all functionality as it would be available to C programmers. GRASS GIS provides a specialized Python library for managing and analyzing spatio-temporal data (Gebbert and Pebesma, 2014). The temporal library introduces space-time datasets representing time series of raster, 3D raster or vector maps and allows users to combine various spatio-temporal operations including queries, aggregation, sampling or the analysis of spatio-temporal topology. We will also discuss the advantages of implementing a scientific algorithm as a GRASS GIS module and we will show how to write such a module in Python. To facilitate the development of the module, GRASS GIS provides a Python library for testing (Petras and Gebbert, 2014) which helps researchers to ensure the robustness of the algorithm, correctness of the results in edge cases as well as the detection of changes in results due to new development. For all modules GRASS GIS automatically creates standardized command line and graphical user interfaces and documentation. Finally, we will show how GRASS GIS can be used together with powerful Python tools such as the NumPy package and the IPython Notebook. References: Gebbert, S., Pebesma, E., 2014. A temporal GIS for field based environmental modeling. Environmental Modelling & Software 53, 1-12. Neteler, M., Bowman, M.H., Landa, M. and Metz, M., 2012. GRASS GIS: a multi-purpose Open Source GIS. Environmental Modelling & Software 31: 124-130. Petras, V., Gebbert, S., 2014. Testing framework for GRASS GIS: ensuring reproducibility of scientific geospatial computing. Poster presented at: AGU Fall Meeting, December 15-19, 2014, San Francisco, USA. Zambelli, P., Gebbert, S., Ciolli, M., 2013. Pygrass: An Object Oriented Python Application Programming Interface (API) for Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS). ISPRS International Journal of Geo-Information 2, 201-219.
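As a small illustration of the scripting-library style mentioned above, a module call from Python might look like this; it is a generic sketch that assumes it is run inside a GRASS session containing an 'elevation' raster:

    # Run a GRASS module and read its output back into Python.
    # Assumes an active GRASS session with an 'elevation' raster in the current mapset.
    import grass.script as gs

    gs.run_command('r.slope.aspect', elevation='elevation',
                   slope='slope', aspect='aspect', overwrite=True)
    stats = gs.parse_command('r.univar', map='slope', flags='g')
    print(stats['mean'], stats['max'])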
Hemodynamic consequences of cardiac malformations in two juvenile ball pythons (Python regius).
Jensen, Bjarke; Wang, Tobias
2009-12-01
Two cases of bifid ventricles and cardiac malformations in juvenile ball pythons (Python regius) were investigated by blood pressure measurements and macro- and microscopic sectioning. A study of a normal ball python was included for reference. In both cases, all cardiac chambers were enlarged and abnormally shaped. Internal assessment of the ventricles revealed a pronounced defect of the muscular ridge, which normally is responsible for separating the systemic and pulmonary circuits. Consistent with the small muscular ridge, systolic pressures were identical in the pulmonary and systemic arteries, but the snakes nevertheless lived to reach body weights severalfold greater than their hatchling weights.
pyBSM: A Python package for modeling imaging systems
NASA Astrophysics Data System (ADS)
LeMaster, Daniel A.; Eismann, Michael T.
2017-05-01
There are components that are common to all electro-optical and infrared imaging system performance models. The purpose of the Python Based Sensor Model (pyBSM) is to provide open source access to these functions for other researchers to build upon. Specifically, pyBSM implements much of the capability found in the ERIM Image Based Sensor Model (IBSM) V2.0 along with some improvements. The paper also includes two use-case examples. First, performance of an airborne imaging system is modeled using the General Image Quality Equation (GIQE). The results are then decomposed into factors affecting noise and resolution. Second, pyBSM is paired with openCV to evaluate performance of an algorithm used to detect objects in an image.
Personalization of structural PDB files.
Woźniak, Tomasz; Adamiak, Ryszard W
2013-01-01
The PDB format is most commonly applied by various programs to define the three-dimensional structure of biomolecules. However, the programs often use different versions of the format. Thus far, no comprehensive solution for unifying the PDB formats has been developed. Here we present an open-source, Python-based tool called PDBinout for processing and conversion of various versions of the PDB file format for biostructural applications. Moreover, PDBinout allows users to create their own PDB versions. PDBinout is freely available under the LGPL licence at http://pdbinout.ibch.poznan.pl.
Py4Syn: Python for synchrotrons.
Slepicka, H H; Canova, H F; Beniz, D B; Piton, J R
2015-09-01
In this report, Py4Syn, an open-source Python-based library for data acquisition, device manipulation, scan routines and other helper functions, is presented. Driven by ease-of-use and scalability ideals, Py4Syn offers a control-system-agnostic solution and a high level of customization for scans and data output, covering distinct techniques and facilities. Here, most of the library functionalities are described, examples of use are shown and ideas for future implementations are presented.
A computational approach to climate science education with CLIMLAB
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2017-12-01
CLIMLAB is a Python-based software toolkit for interactive, process-oriented climate modeling for use in education and research. It is motivated by the need for simpler tools and more reproducible workflows with which to "fill in the gaps" between blackboard-level theory and the results of comprehensive climate models. With CLIMLAB you can interactively mix and match physical model components, or combine simpler process models together into a more comprehensive model. I use CLIMLAB in the classroom to put models in the hands of students (undergraduate and graduate), and emphasize a hierarchical, process-oriented approach to understanding the key emergent properties of the climate system. CLIMLAB is equally a tool for climate research, where the same needs exist for more robust, process-based understanding and reproducible computational results. I will give an overview of CLIMLAB and an update on recent developments, including: a full-featured, well-documented, interactive implementation of a widely-used radiation model (RRTM); packaging with conda-forge for compiler-free (and hassle-free!) installation on Mac, Windows and Linux; interfacing with xarray for i/o and graphics with gridded model data; and a rich and growing collection of examples and self-computing lecture notes in Jupyter notebook format.
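As a flavor of the interactive, process-oriented workflow, a minimal sketch (assuming a standard climlab installation; the settings are illustrative, not a tuned configuration) that builds and runs a simple energy balance model might look like this:

    # Minimal sketch: a zonal-mean energy balance model run toward equilibrium.
    import climlab

    ebm = climlab.EBM(num_lat=90)    # diffusive energy balance model on a latitude grid
    ebm.integrate_years(5)           # step the model forward five years
    print(float(ebm.Ts.mean()))      # unweighted mean surface temperature (deg C)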
SpacePy - a Python-based library of tools for the space sciences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven K; Welling, Daniel T; Koller, Josef
Space science deals with the bodies within the solar system and the interplanetary medium; the primary focus is on atmospheres and above - at Earth the short-timescale variation in the geomagnetic field, the Van Allen radiation belts and the deposition of energy into the upper atmosphere are key areas of investigation. SpacePy is a package for Python, targeted at the space sciences, that aims to make basic data analysis, modeling and visualization easier. It builds on the capabilities of the well-known NumPy and MatPlotLib packages. Publication quality output direct from analyses is emphasized. The SpacePy project seeks to promote accurate and open research standards by providing an open environment for code development. In the space physics community there has long been a significant reliance on proprietary languages that restrict free transfer of data and reproducibility of results. By providing a comprehensive, open-source library of widely used analysis and visualization tools in a free, modern and intuitive language, we hope that this reliance will be diminished. SpacePy includes implementations of widely used empirical models, statistical techniques used frequently in space science (e.g. superposed epoch analysis), and interfaces to advanced tools such as electron drift shell calculations for radiation belt studies. SpacePy also provides analysis and visualization tools for components of the Space Weather Modeling Framework - currently this only includes the BATS-R-US 3-D magnetohydrodynamic model and the RAM ring current model - including streamline tracing in vector fields. Further development is currently underway. External libraries, which include well-known magnetic field models, high-precision time conversions and coordinate transformations are wrapped for access from Python using SWIG and f2py. The rest of the tools have been implemented directly in Python. The provision of open-source tools to perform common tasks will provide openness in the analysis methods employed in scientific studies and will give access to advanced tools to all space scientists regardless of affiliation or circumstance.
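A brief sketch of the time-handling and coordinate utilities described above (assuming a standard SpacePy installation; the coordinate conversion additionally relies on the wrapped IRBEM library):

    # Time conversions and a coordinate transformation with SpacePy.
    import spacepy.time as spt
    import spacepy.coordinates as spc

    t = spt.Ticktock(['2002-02-02T12:00:00'], 'ISO')
    print(t.TAI, t.MJD)                 # high-precision time system conversions

    c = spc.Coords([[3.0, 0.0, 1.0]], 'GEO', 'car')
    c.ticks = t                         # epoch needed for the transformation
    print(c.convert('GSM', 'car'))      # geographic -> geocentric solar magnetospheric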
Flexible Environments for Grand-Challenge Simulation in Climate Science
NASA Astrophysics Data System (ADS)
Pierrehumbert, R.; Tobis, M.; Lin, J.; Dieterich, C.; Caballero, R.
2004-12-01
Current climate models are monolithic codes, generally in Fortran, aimed at high-performance simulation of the modern climate. Though they adequately serve their designated purpose, they present major barriers to application in other problems. Tailoring them to paleoclimate or planetary simulations, for instance, takes months of work. Theoretical studies, where one may want to remove selected processes or break feedback loops, are similarly hindered. Further, current climate models are of little value in education, since the implementation of textbook concepts and equations in the code is obscured by technical detail. The Climate Systems Center at the University of Chicago seeks to overcome these limitations by bringing modern object-oriented design into the business of climate modeling. Our ultimate goal is to produce an end-to-end modeling environment capable of configuring anything from a simple single-column radiative-convective model to a full 3-D coupled climate model using a uniform, flexible interface. Technically, the modeling environment is implemented as a Python-based software component toolkit: key number-crunching procedures are implemented as discrete, compiled-language components 'glued' together and co-ordinated by Python, combining the high performance of compiled languages and the flexibility and extensibility of Python. We are incrementally working towards this final objective following a series of distinct, complementary lines. We will present an overview of these activities, including PyOM, a Python-based finite-difference ocean model allowing run-time selection of different Arakawa grids and physical parameterizations; CliMT, an atmospheric modeling toolkit providing a library of 'legacy' radiative, convective and dynamical modules which can be knitted into dynamical models; and PyCCSM, a version of NCAR's Community Climate System Model in which the coupler and run-control architecture are re-implemented in Python, augmenting its flexibility and adaptability.
PyMCT: A Very High Level Language Coupling Tool For Climate System Models
NASA Astrophysics Data System (ADS)
Tobis, M.; Pierrehumbert, R. T.; Steder, M.; Jacob, R. L.
2006-12-01
At the Climate Systems Center of the University of Chicago, we have been examining strategies for applying agile programming techniques to complex high-performance modeling experiments. While the "agile" development methodology differs from a conventional requirements process and its associated milestones, the process remains a formal one. It is distinguished by continuous improvement in functionality, large numbers of small releases, extensive and ongoing testing strategies, and a strong reliance on very high level languages (VHLL). Here we report on PyMCT, which we intend as a core element in a model ensemble control superstructure. PyMCT is a set of Python bindings for MCT, the Fortran-90 based Model Coupling Toolkit, which forms the infrastructure for the inter-component communication in the Community Climate System Model (CCSM). MCT provides a scalable model communication infrastructure. In order to take maximum advantage of agile software development methodologies, we exposed MCT functionality to Python, a prominent VHLL. We describe how the scalable architecture of MCT allows us to overcome the relatively weak runtime performance of Python, so that the performance of the combined system is not severely impacted. To demonstrate these advantages, we reimplemented the CCSM coupler in Python. While this alone offers no new functionality, it does provide a rigorous test of PyMCT functionality and performance. We reimplemented the CPL6 library, presenting an interesting case study of the comparison between conventional Fortran-90 programming and the higher abstraction level provided by a VHLL. The powerful abstractions provided by Python will allow much more complex experimental paradigms. In particular, we hope to build on the scriptability of our coupling strategy to enable systematic sensitivity tests. Our most ambitious objective is to combine our efforts with Bayesian inverse modeling techniques toward objective tuning at the highest level, across model architectures.
Naval Observatory Vector Astrometry Software (NOVAS) Version 3.1:Fortran, C, and Python Editions
NASA Astrophysics Data System (ADS)
Kaplan, G. H.; Bangert, J. A.; Barron, E. G.; Bartlett, J. L.; Puatua, W.; Harris, W.; Barrett, P.
2012-08-01
The Naval Observatory Vector Astrometry Software (NOVAS) is a source-code library that provides common astrometric quantities and transformations to high precision. The library can supply, in one or two subroutine or function calls, the instantaneous celestial position of any star or planet in a variety of coordinate systems. NOVAS also provides access to all of the building blocks that go into such computations. NOVAS is used for a wide variety of applications, including the U.S. portions of The Astronomical Almanac and a number of telescope control systems. NOVAS uses IAU recommended models for Earth orientation, including the IAU 2006 precession theory, the IAU 2000A and 2000B nutation series, and diurnal rotation based on the celestial and terrestrial intermediate origins. Equinox-based quantities, such as sidereal time, are also supported. NOVAS Earth orientation calculations match those from SOFA at the sub-microarcsecond level for comparable transformations. NOVAS algorithms for aberration and gravitational light deflection are equivalent, at the microarcsecond level, to those inherent in the current consensus VLBI delay algorithm. NOVAS can be easily connected to the JPL planetary/lunar ephemerides (e.g., DE405), and connections to IMCCE and IAA planetary ephemerides are planned. NOVAS Version 3.1 introduces a Python edition alongside the Fortran and C editions. The Python edition uses the computational code from the C edition and currently mimics the function calls of the C edition. Future versions will expand the functionality of the Python edition to exploit the object-oriented features of Python. In the Version 3.1 C edition, the ephemeris-access functions have been revised for use on 64-bit systems and for improved performance in general. NOVAS source code, auxiliary files, and documentation are available from the USNO website (http://aa.usno.navy.mil/software/novas/novas_info.php).
STSE: Spatio-Temporal Simulation Environment Dedicated to Biology.
Stoma, Szymon; Fröhlich, Martina; Gerber, Susanne; Klipp, Edda
2011-04-28
Recently, the availability of high-resolution microscopy together with the advancements in the development of biomarkers as reporters of biomolecular interactions increased the importance of imaging methods in molecular cell biology. These techniques enable the investigation of cellular characteristics like volume, size and geometry as well as volume and geometry of intracellular compartments, and the amount of existing proteins in a spatially resolved manner. Such detailed investigations opened up many new areas of research in the study of spatial, complex and dynamic cellular systems. One of the crucial challenges for the study of such systems is the design of a well-structured and optimized workflow to provide a systematic and efficient hypothesis verification. Computer Science can efficiently address this task by providing software that facilitates handling, analysis, and evaluation of biological data to the benefit of experimenters and modelers. The Spatio-Temporal Simulation Environment (STSE) is a set of open-source tools provided to conduct spatio-temporal simulations in discrete structures based on microscopy images. The framework contains modules to digitize, represent, analyze, and mathematically model spatial distributions of biochemical species. Graphical user interface (GUI) tools provided with the software enable meshing of the simulation space based on the Voronoi concept. In addition, it supports automatic acquisition of spatial information for the mesh from the images based on pixel luminosity (e.g. corresponding to molecular levels from microscopy images). STSE is freely available either as a stand-alone version or included in the Linux live distribution Systems Biology Operational Software (SB.OS) and can be downloaded from http://www.stse-software.org/. The Python source code as well as a comprehensive user manual and video tutorials are also offered to the research community. We discuss the main concepts of the STSE design and workflow. We demonstrate its usefulness using the example of a signaling cascade leading to formation of a morphological gradient of Fus3 within the cytoplasm of the mating yeast cell Saccharomyces cerevisiae. STSE is an efficient and powerful novel platform, designed for computational handling and evaluation of microscopic images. It allows for an uninterrupted workflow including digitization, representation, analysis, and mathematical modeling. By providing the means to relate the simulation to the image data it allows for systematic, image-driven model validation or rejection. STSE can be scripted and extended using the Python language. STSE should be considered an API together with workflow guidelines and a collection of GUI tools rather than a stand-alone application. The priority of the project is to provide an easy and intuitive way of extending and customizing software using the Python language.
ERDC MSRC Resource. High Performance Computing for the Warfighter. Fall 2006
2006-01-01
to as Aggregated Combat Modeling, putting us at the campaign level).” Incorporating UIT within DAC The DAC system is written in Python and uses...API calls with two Python classes, UITConnectionFactory and UITConnection. UITConnectionFactory supports Kerberos authentication and establishes a...API calls within these Python classes, we insulated the DAC code from the Python SOAP interface requirements and details of the ERDC MSRC Resource
Python-Based Applications for Hydrogeological Modeling
NASA Astrophysics Data System (ADS)
Khambhammettu, P.
2013-12-01
Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib) and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The Python wrapper invokes the underlying FORTRAN layer to compute transient groundwater elevations and processes this information to create time-series and 2D plots.
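The convolution step described above is simple to express in NumPy; the following sketch uses invented numbers purely to illustrate the idea of combining a source release history with a unit response function (URF), and is not the site-specific script:

    # Illustrative only: receptor concentration as the convolution of a source
    # release history with a unit response function (URF). Numbers are invented.
    import numpy as np

    dt = 30.0                                                         # time step, days
    source_history = np.array([0, 5, 5, 2, 0, 0, 0, 0], float)       # release rate per step
    urf = np.array([0.0, 0.10, 0.30, 0.25, 0.15, 0.08, 0.04, 0.02])  # response to a unit release

    receptor_conc = np.convolve(source_history, urf)[:source_history.size] * dt
    print(receptor_conc)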
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allevato, Adam
2016-07-21
ROSSTEP is a system for sequentially running roslaunch, rosnode, and bash scripts automatically, for use in Robot Operating System (ROS) applications. The system consists of YAML files which define actions and conditions. A Python file parses the code and runs actions sequentially using the sys and subprocess Python modules. Between actions, it uses various ROS-based code to check conditions required to proceed, and only moves on to the next action when all the necessary conditions have been met. Included is rosstep-creator, a Qt application designed to create the YAML files required for ROSSTEP. It has a nearly one-to-one mapping from interface elements to YAML output, and serves as a convenient GUI for working with the ROSSTEP system.
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.; ...
2017-02-23
Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.
Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.
Python Scripts for Automation of Current-Voltage Testing of Semiconductor Devices (FY17)
2017-01-01
ARL-TR-7923 ● JAN 2017 US Army Research Laboratory Python Scripts for Automation of Current-Voltage Testing of Semiconductor...manual device-testing procedures is reduced or eliminated through automation. This technical report includes scripts written in Python, version 2.7, used ...nothing. 3.1.9 Exit Program The script exits the entire program. Line 505, sys.exit(), uses the sys package that comes with Python to exit system
Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)
2008-03-01
4. REPAST (Java, Python, C#, Open Source) 5. MASON: Multi-Agent Modeling Language (Swarm Extension... Python, C#, Open Source) Repast (Recursive Porous Agent Simulation Toolkit) was designed for building agent-based models and simulations in the...Repast makes it easy for inexperienced users to build models by including a built-in simple model and provide interfaces through which menus and Python
PyXRF: Python-based X-ray fluorescence analysis package
NASA Astrophysics Data System (ADS)
Li, Li; Yan, Hanfei; Xu, Wei; Yu, Dantong; Heroux, Annie; Lee, Wah-Keat; Campbell, Stuart I.; Chu, Yong S.
2017-09-01
We developed a Python-based fluorescence analysis package (PyXRF) at the National Synchrotron Light Source II (NSLS-II) for the X-ray fluorescence-microscopy beamlines, including Hard X-ray Nanoprobe (HXN), and Submicron Resolution X-ray Spectroscopy (SRX). This package contains a high-level fitting engine, a comprehensive command-line/GUI design, rigorous physics calculations, and a visualization interface. PyXRF offers a method of automatically finding elements, so that users do not need to spend extra time selecting elements manually. Moreover, PyXRF provides a convenient and interactive way of adjusting fitting parameters with physical constraints. This will help us perform quantitative analysis, and find an appropriate initial guess for fitting. Furthermore, we also create an advanced mode for expert users to construct their own fitting strategies with a full control of each fitting parameter. PyXRF runs single-pixel fitting at a fast speed, which opens up the possibilities of viewing the results of fitting in real time during experiments. A convenient I/O interface was designed to obtain data directly from NSLS-II's experimental database. PyXRF is under open-source development and designed to be an integral part of NSLS-II's scientific computation library.
Emerge - A Python environment for the modeling of subsurface transfers
NASA Astrophysics Data System (ADS)
Lopez, S.; Smai, F.; Sochala, P.
2014-12-01
The simulation of subsurface mass and energy transfers often relies on specific codes that were mainly developed using compiled languages, which usually ensure computational efficiency at the expense of relatively long development times and relatively rigid software. Even if a very detailed, possibly graphical, user interface is developed, the core numerical aspects are rarely accessible and the smallest modification will always need a compilation step. Thus, user-defined physical laws or alternative numerical schemes may be relatively difficult to use. Over the last decade, Python has emerged as a popular and widely used language in the scientific community. There already exist several libraries for the pre- and post-treatment of input and output files for reservoir simulators (e.g. pytough). Development times in Python are considerably reduced compared to compiled languages, and programs can be easily interfaced with libraries written in compiled languages, including several comprehensive numerical libraries that provide sequential and parallel solvers (e.g. PETSc, Trilinos…). The core objective of the Emerge project is to explore the possibility of developing a modeling environment fully in Python. Consequently, we are developing an open Python package with the classes/objects necessary to express, discretize and solve the physical problems encountered in the modeling of subsurface transfers. We heavily relied on Python to have a convenient and concise way of manipulating potentially complex concepts with a few lines of code and a high level of abstraction. Our result aims to be a friendly numerical environment targeting both numerical engineers and physicists or geoscientists, with the possibility to quickly specify and handle geometries, arbitrary meshes, spatially or temporally varying properties, PDE formulations, boundary conditions…
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkerts, M; University of California, San Diego, La Jolla, CA; Graves, Y
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command line-based GPU applications to perform a MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and is able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.
Secor, Stephen M; Taylor, Josi R; Grosell, Martin
2012-01-01
Snakes exhibit an apparent dichotomy in the regulation of gastrointestinal (GI) performance with feeding and fasting; frequently feeding species modestly regulate intestinal function whereas infrequently feeding species rapidly upregulate and downregulate intestinal function with the start and completion of each meal, respectively. The downregulatory response with fasting for infrequently feeding snakes is hypothesized to be a selective attribute that reduces energy expenditure between meals. To ascertain the links between feeding habit, whole-animal metabolism, and GI function and metabolism, we measured preprandial and postprandial metabolic rates and gastric and intestinal acid-base secretion, epithelial conductance and oxygen consumption for the frequently feeding diamondback water snake (Nerodia rhombifer) and the infrequently feeding Burmese python (Python molurus). Independent of body mass, Burmese pythons possess a significantly lower standard metabolic rate and respond to feeding with a much larger metabolic response compared with water snakes. While fasting, pythons cease gastric acid and intestinal base secretion, both of which are stimulated with feeding. In contrast, fasted water snakes secreted gastric acid and intestinal base at rates similar to those of digesting snakes. We observed no difference between fasted and fed individuals for either species in gastric or intestinal transepithelial potential and conductance, with the exception of a significantly greater gastric transepithelial potential for fed pythons at the start of titration. Water snakes experienced no significant change in gastric or intestinal metabolism with feeding. Fed pythons, in contrast, experienced a near-doubling of gastric metabolism and a tripling of intestinal metabolic rate. For fasted individuals, the metabolic rate of the stomach and small intestine was significantly lower for pythons than for water snakes. The fasting downregulation of digestive function for pythons is manifested in a depressed gastric and intestinal metabolism, which selectively serves to reduce basal metabolism and hence promote survival between infrequent meals. By maintaining elevated GI performance between meals, fasted water snakes incur the additional cost of tissue activity, which is expressed in a higher standard metabolic rate.
Climate Model Diagnostic Analyzer Web Service System
NASA Astrophysics Data System (ADS)
Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.
2013-12-01
The latest Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report stressed the need for the comprehensive and innovative evaluation of climate models with newly available global observations. The traditional approach to climate model evaluation, which compares a single parameter at a time, identifies symptomatic model biases and errors but fails to diagnose the model problems. The model diagnosis process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. To address these challenges, we are developing a parallel, distributed web-service system that enables the physics-based multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation and (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, and (4) the calculation of difference between two variables. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA is planned to be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. The requirements of the educational tool are defined with the interaction with the school organizers, and CMDA is customized to meet the requirements accordingly. The tool needs to be production quality for 30+ simultaneous users. The summer school will thus serve as a valuable testbed for the tool development, preparing CMDA to serve the Earth-science modeling and model-analysis community at the end of the project. This work was funded by the NASA Earth Science Program called Computational Modeling Algorithms and Cyberinfrastructure (CMAC).
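The wrapping approach described above, turning an existing analysis routine into a web service, can be illustrated with a minimal Flask sketch; the endpoint, function name and payload format here are invented for illustration and are not the CMDA code:

    # Illustrative Flask wrapper around a stand-in "science" routine.
    from flask import Flask, jsonify, request
    import numpy as np

    app = Flask(__name__)

    def seasonal_mean(values, season_length=3):
        """Stand-in for an existing analysis routine."""
        data = np.asarray(values, dtype=float)
        usable = data[:data.size // season_length * season_length]
        return usable.reshape(-1, season_length).mean(axis=1)

    @app.route('/seasonal_mean', methods=['POST'])
    def seasonal_mean_service():
        payload = request.get_json()
        result = seasonal_mean(payload['values'], payload.get('season_length', 3))
        return jsonify(result.tolist())

    if __name__ == '__main__':
        app.run()   # behind Gunicorn or Tornado in a production deployment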
Betrayal: radio-tagged Burmese pythons reveal locations of conspecifics in Everglades National Park
Smith, Brian J.; Cherkiss, Michael S.; Hart, Kristen M.; Rochford, Michael R.; Selby, Thomas H.; Snow, Ray W; Mazzotti, Frank J.
2016-01-01
The “Judas” technique is based on the idea that a radio-tagged individual can be used to “betray” conspecifics during the course of its routine social behavior. The Burmese python (Python bivittatus) is an invasive constrictor in southern Florida, and few methods are available for its control. Pythons are normally solitary, but from December–April in southern Florida, they form breeding aggregations containing up to 8 individuals, providing an opportunity to apply the technique. We radio-tracked 25 individual adult pythons of both sexes during the breeding season from 2007–2012. Our goals were to (1) characterize python movements and determine habitat selection for betrayal events, (2) quantify betrayal rates of Judas pythons, and (3) compare the efficacy of this tool with current tools for capturing pythons, both in terms of cost per python removed (CPP) and catch per unit effort (CPUE). In a total of 33 python-seasons, we had 8 betrayal events (24 %) in which a Judas python led us to new pythons. Betrayal events occurred more frequently in lowland forest (including tree islands) than would be expected by chance alone. These 8 events resulted in the capture of 14 new individuals (1–4 new pythons per event). Our effort comparison shows that while the Judas technique is more costly than road cruising surveys per python removed, the Judas technique yields more large, reproductive females and is effective at a time of year that road cruising is not, making it a potential complement to the status quo removal effort.
Charming Users into Scripting CIAO with Python
NASA Astrophysics Data System (ADS)
Burke, D. J.
2011-07-01
The Science Data Systems group of the Chandra X-ray Center provides a number of scripts and Python modules that extend the capabilities of CIAO. Experience in converting the existing scripts—written in a variety of languages such as bash, csh/tcsh, Perl and S-Lang—to Python, and conversations with users, led to the development of the ciao_contrib.runtool module. This allows users to easily run CIAO tools from Python scripts, and utilizes the metadata provided by the parameter-file system to create an API that provides the flexibility and safety guarantees of the command-line. The module is provided to the user community and is being used within our group to create new scripts.
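As we understand the interface, the module exposes each CIAO tool as a callable Python object whose keywords mirror its parameter file; a short sketch (assuming a CIAO installation with the contributed scripts, and with illustrative file names and filters) might look like this:

    # Run the CIAO tool dmcopy from Python via ciao_contrib.runtool.
    # File names and the energy filter are illustrative.
    from ciao_contrib.runtool import dmcopy

    dmcopy.punlearn()                          # reset the tool's parameter file
    dmcopy(infile="evt2.fits[energy=500:7000]",
           outfile="evt2_filtered.fits",
           clobber=True)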
Goloborodko, Anton A; Levitsky, Lev I; Ivanov, Mark V; Gorshkov, Mikhail V
2013-02-01
Pyteomics is a cross-platform, open-source Python library providing a rich set of tools for MS-based proteomics. It provides modules for reading LC-MS/MS data, search engine output, protein sequence databases, theoretical prediction of retention times, electrochemical properties of polypeptides, mass and m/z calculations, and sequence parsing. Pyteomics is available under Apache license; release versions are available at the Python Package Index http://pypi.python.org/pyteomics, the source code repository at http://hg.theorchromo.ru/pyteomics, documentation at http://packages.python.org/pyteomics. Pyteomics.biolccc documentation is available at http://packages.python.org/pyteomics.biolccc/. Questions on installation and usage can be addressed to pyteomics mailing list: pyteomics@googlegroups.com.
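A brief sketch of the mass and sequence utilities (assuming a standard Pyteomics installation; the sequences are arbitrary examples):

    # Peptide mass/charge calculations and in silico tryptic digestion with Pyteomics.
    from pyteomics import mass, parser

    print(mass.calculate_mass(sequence='PEPTIDE'))             # monoisotopic mass
    print(mass.calculate_mass(sequence='PEPTIDE', charge=2))   # m/z of the 2+ ion
    print(parser.cleave('MKWVTFISLLLLFSSAYS',
                        parser.expasy_rules['trypsin']))       # tryptic peptides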
pyOpenMS: a Python-based interface to the OpenMS mass-spectrometry algorithm library.
Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars
2014-01-01
pyOpenMS is an open-source, Python-based interface to the C++ OpenMS library, providing facile access to a feature-rich, open-source algorithm library for MS-based proteomics analysis. It contains Python bindings that allow raw access to the data structures and algorithms implemented in OpenMS, specifically those for file access (mzXML, mzML, TraML, mzIdentML among others), basic signal processing (smoothing, filtering, de-isotoping, and peak-picking) and complex data analysis (including label-free, SILAC, iTRAQ, and SWATH analysis tools). pyOpenMS thus allows fast prototyping and efficient workflow development in a fully interactive manner (using the interactive Python interpreter) and is also ideally suited for researchers not proficient in C++. In addition, our code to wrap a complex C++ library is completely open-source, allowing other projects to create similar bindings with ease. The pyOpenMS framework is freely available at https://pypi.python.org/pypi/pyopenms while the autowrap tool to create Cython code automatically is available at https://pypi.python.org/pypi/autowrap (both released under the 3-clause BSD licence). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
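For orientation, raw data access through the bindings looks like the following minimal sketch (assuming pyopenms is installed; the file name is illustrative):

    # Load an mzML file and inspect the first spectrum with pyOpenMS.
    import pyopenms

    exp = pyopenms.MSExperiment()
    pyopenms.MzMLFile().load("sample.mzML", exp)   # file name is illustrative
    print(exp.getNrSpectra())
    mz, intensity = exp.getSpectrum(0).get_peaks()
    print(mz[:5], intensity[:5])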
A Reward-Based Behavioral Platform to Measure Neural Activity during Head-Fixed Behavior.
Micallef, Andrew H; Takahashi, Naoya; Larkum, Matthew E; Palmer, Lucy M
2017-01-01
Understanding the neural computations that contribute to behavior requires recording from neurons while an animal is behaving. This is not an easy task as most subcellular recording techniques require absolute head stability. The Go/No-Go sensory task is a powerful decision-driven task that enables an animal to report a binary decision during head-fixation. Here we discuss how to set up an Arduino- and Python-based platform to control a Go/No-Go sensory behavior paradigm. Using an Arduino micro-controller and a custom-written Python program, a reward can be delivered to the animal depending on the decision reported. We discuss the various components required to build the behavioral apparatus that can control and report such a sensory stimulus paradigm. This system enables the end user to control the behavioral testing in real time, and it therefore provides a strong custom-made platform for probing the neural basis of behavior.
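The custom program itself is not reproduced in the abstract; the following is only an illustrative sketch of the general pattern, assuming the PC-side Python code talks to the Arduino over a serial link (pyserial) and triggers the reward valve with a single command byte:

```python
# Illustrative only: the port name and command byte are assumptions, not
# taken from the published platform.
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def deliver_reward(licked: bool, go_trial: bool) -> None:
    """Send a reward command only for a correct Go response (a 'hit')."""
    if go_trial and licked:
        arduino.write(b"R")   # command byte understood by the Arduino sketch
```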
Chen, Zhen; Zhao, Pei; Li, Fuyi; Leier, André; Marquez-Lago, Tatiana T; Wang, Yanan; Webb, Geoffrey I; Smith, A Ian; Daly, Roger J; Chou, Kuo-Chen; Song, Jiangning
2018-03-08
Structural and physiochemical descriptors extracted from sequence data have been widely used to represent sequences and predict structural, functional, expression and interaction profiles of proteins and peptides as well as DNAs/RNAs. Here, we present iFeature, a versatile Python-based toolkit for generating various numerical feature representation schemes for both protein and peptide sequences. iFeature is capable of calculating and extracting a comprehensive spectrum of 18 major sequence encoding schemes that encompass 53 different types of feature descriptors. It also allows users to extract specific amino acid properties from the AAindex database. Furthermore, iFeature integrates 12 different types of commonly used feature clustering, selection, and dimensionality reduction algorithms, greatly facilitating training, analysis, and benchmarking of machine-learning models. The functionality of iFeature is made freely available via an online web server and a stand-alone toolkit. http://iFeature.erc.monash.edu/; https://github.com/Superzchen/iFeature/. jiangning.song@monash.edu; kcchou@gordonlifescience.org; roger.daly@monash.edu. Supplementary data are available at Bioinformatics online.
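As a hedged example of driving the stand-alone toolkit from Python, the command-line flags below follow the project README as I recall it and should be checked against the repository; the FASTA file name is a placeholder:

```python
# Run iFeature's amino-acid-composition (AAC) descriptor on a FASTA file.
import subprocess

subprocess.run(
    ["python", "iFeature.py", "--file", "proteins.fasta", "--type", "AAC"],
    check=True,
)
```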
Generic Space Science Visualization in 2D/3D using SDDAS
NASA Astrophysics Data System (ADS)
Mukherjee, J.; Murphy, Z. B.; Gonzalez, C. A.; Muller, M.; Ybarra, S.
2017-12-01
The Southwest Data Display and Analysis System (SDDAS) is a flexible multi-mission / multi-instrument software system intended to support space physics data analysis, and has been in active development for over 20 years. For the Magnetospheric Multi-Scale (MMS), Juno, Cluster, and Mars Express missions, we have modified these generic tools for visualizing data in two and three dimensions. The SDDAS software is open source and makes use of various other open source packages, including VTK and Qwt. The software offers interactive plotting as well as Python and Lua modules to modify the data before plotting. In theory, by writing a Lua or Python module to read the data, any data could be used. Currently, the software can natively read data in IDFS, CEF, CDF, FITS, SEG-Y, ASCII, and XLS formats. We have integrated the software with other Python packages such as SPICE and SpacePy. Included with the visualization software is a database application and other utilities for managing data that can retrieve data from the Cluster Active Archive and Space Physics Data Facility at Goddard, as well as other local archives. Line plots, spectrograms, geographic plots, volume plots, strip charts, etc. are just some of the types of plots one can generate with SDDAS. Furthermore, due to the design, output is not limited strictly to visualization, as SDDAS can also be used to generate stand-alone IDL or Python visualization code. Lastly, SDDAS has been successfully used as a backend for several web-based analysis systems as well.
eSBMTools 1.0: enhanced native structure-based modeling tools.
Lutz, Benjamin; Sinner, Claude; Heuermann, Geertje; Verma, Abhinav; Schug, Alexander
2013-11-01
Molecular dynamics simulations provide detailed insights into the structure and function of biomolecular systems. Thus, they complement experimental measurements by giving access to experimentally inaccessible regimes. Among the different molecular dynamics techniques, native structure-based models (SBMs) are based on energy landscape theory and the principle of minimal frustration. Typically used in protein and RNA folding simulations, they coarse-grain the biomolecular system and/or simplify the Hamiltonian resulting in modest computational requirements while achieving high agreement with experimental data. eSBMTools streamlines running and evaluating SBM in a comprehensive package and offers high flexibility in adding experimental- or bioinformatics-derived restraints. We present a software package that allows setting up, modifying and evaluating SBM for both RNA and proteins. The implemented workflows include predicting protein complexes based on bioinformatics-derived inter-protein contact information, a standardized setup of protein folding simulations based on the common PDB format, calculating reaction coordinates and evaluating the simulation by free-energy calculations with weighted histogram analysis method or by phi-values. The modules interface with the molecular dynamics simulation program GROMACS. The package is open source and written in architecture-independent Python2. http://sourceforge.net/projects/esbmtools/. alexander.schug@kit.edu. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Celicourt, P.; Piasecki, M.
2014-12-01
The high cost of hydro-meteorological data acquisition, communication and publication systems, along with limited qualified human resources, is considered the main reason why hydro-meteorological data collection remains a challenge, especially in developing countries. Despite significant advances in sensor network technologies, which in the last two decades gave birth to open hardware and software and to low-cost (less than $50) and low-power (on the order of a few milliwatts) sensor platforms, sensor and sensor network deployment remains a labor-intensive, time-consuming, cumbersome, and thus expensive task. These factors give rise to the need to develop an affordable, simple-to-deploy, scalable and self-organizing end-to-end (from sensor to publication) system suitable for deployment in such countries. The design of the envisioned system will consist of a few Sensed-And-Programmed Arduino-based sensor nodes with low-cost sensors measuring parameters relevant to hydrological processes and a Raspberry Pi micro-computer hosting the in-the-field back-end data management. The latter comprises the Python/Django model of the CUAHSI Observations Data Model (ODM), namely DjangODM, backed by a PostgreSQL database server. We are also developing a Python-based data processing script which will be paired with the data autoloading capability of Django to populate the DjangODM database with the incoming data. To publish the data, we will use WOFpy (WaterOneFlow Web Services in Python), developed by the Texas Water Development Board for 'Water Data for Texas', which can produce WaterML web services from a variety of back-end database installations such as SQLite, MySQL, and PostgreSQL. A step further would be the development of an appealing online visualization tool using Python statistics and analytics tools (SciPy, NumPy, Pandas) showing the spatial distribution of variables across an entire watershed as a time-variant layer on top of a basemap.
Gala: A Python package for galactic dynamics
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.
2017-10-01
Gala is an Astropy-affiliated Python package for galactic dynamics. Python enables wrapping low-level languages (e.g., C) for speed without losing flexibility or ease-of-use in the user-interface. The API for Gala was designed to provide a class-based and user-friendly interface to fast (C or Cython-optimized) implementations of common operations such as gravitational potential and force evaluation, orbit integration, dynamical transformations, and chaos indicators for nonlinear dynamics. Gala also relies heavily on and interfaces well with the implementations of physical units and astronomical coordinate systems in the Astropy package (astropy.units and astropy.coordinates). Gala was designed to be used by both astronomical researchers and by students in courses on gravitational dynamics or astronomy. It has already been used in a number of scientific publications and has also been used in graduate courses on Galactic dynamics to, e.g., provide interactive visualizations of textbook material.
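A minimal, hedged sketch of the orbit-integration workflow described above, using the gala.potential and gala.dynamics modules with astropy.units; the initial conditions are illustrative:

```python
import astropy.units as u
import gala.dynamics as gd
import gala.potential as gp

pot = gp.MilkyWayPotential()                       # built-in composite potential
w0 = gd.PhaseSpacePosition(pos=[8.0, 0.0, 0.0] * u.kpc,
                           vel=[0.0, 220.0, 0.0] * u.km / u.s)
orbit = gp.Hamiltonian(pot).integrate_orbit(w0, dt=1.0 * u.Myr, n_steps=1000)
orbit.plot()                                       # quick-look orbit figure
```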
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu
2017-03-27
A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is in an open source program available for free on GitHub ( https://github.com/fzahari/ParFit ).
Data management routines for reproducible research using the G-Node Python Client library
Sobolev, Andrey; Stoewer, Adrian; Pereira, Michael; Kellner, Christian J.; Garbers, Christian; Rautenberg, Philipp L.; Wachtler, Thomas
2014-01-01
Structured, efficient, and secure storage of experimental data and associated meta-information constitutes one of the most pressing technical challenges in modern neuroscience, and does so particularly in electrophysiology. The German INCF Node aims to provide open-source solutions for this domain that support the scientific data management and analysis workflow, and thus facilitate future data access and reproducible research. G-Node provides a data management system, accessible through an application interface, that is based on a combination of standardized data representation and flexible data annotation to account for the variety of experimental paradigms in electrophysiology. The G-Node Python Library exposes these services to the Python environment, enabling researchers to organize and access their experimental data using their familiar tools while gaining the advantages that a centralized storage entails. The library provides powerful query features, including data slicing and selection by metadata, as well as fine-grained permission control for collaboration and data sharing. Here we demonstrate key actions in working with experimental neuroscience data, such as building a metadata structure, organizing recorded data in datasets, annotating data, or selecting data regions of interest, that can be automated to large degree using the library. Compliant with existing de-facto standards, the G-Node Python Library is compatible with many Python tools in the field of neurophysiology and thus enables seamless integration of data organization into the scientific data workflow. PMID:24634654
GRASS GIS: The first Open Source Temporal GIS
NASA Astrophysics Data System (ADS)
Gebbert, Sören; Leppelt, Thomas
2015-04-01
GRASS GIS is a full featured, general purpose Open Source geographic information system (GIS) with raster, 3D raster and vector processing support[1]. Recently, time was introduced as a new dimension that transformed GRASS GIS into the first Open Source temporal GIS with comprehensive spatio-temporal analysis, processing and visualization capabilities[2]. New spatio-temporal data types were introduced in GRASS GIS version 7, to manage raster, 3D raster and vector time series. These new data types are called space time datasets. They are designed to efficiently handle hundreds of thousands of time stamped raster, 3D raster and vector map layers of any size. Time stamps can be defined as time intervals or time instances in Gregorian calendar time or relative time. Space time datasets are simplifying the processing and analysis of large time series in GRASS GIS, since these new data types are used as input and output parameter in temporal modules. The handling of space time datasets is therefore equal to the handling of raster, 3D raster and vector map layers in GRASS GIS. A new dedicated Python library, the GRASS GIS Temporal Framework, was designed to implement the spatio-temporal data types and their management. The framework provides the functionality to efficiently handle hundreds of thousands of time stamped map layers and their spatio-temporal topological relations. The framework supports reasoning based on the temporal granularity of space time datasets as well as their temporal topology. It was designed in conjunction with the PyGRASS [3] library to support parallel processing of large datasets, that has a long tradition in GRASS GIS [4,5]. We will present a subset of more than 40 temporal modules that were implemented based on the GRASS GIS Temporal Framework, PyGRASS and the GRASS GIS Python scripting library. These modules provide a comprehensive temporal GIS tool set. The functionality range from space time dataset and time stamped map layer management over temporal aggregation, temporal accumulation, spatio-temporal statistics, spatio-temporal sampling, temporal algebra, temporal topology analysis, time series animation and temporal topology visualization to time series import and export capabilities with support for NetCDF and VTK data formats. We will present several temporal modules that support parallel processing of raster and 3D raster time series. [1] GRASS GIS Open Source Approaches in Spatial Data Handling In Open Source Approaches in Spatial Data Handling, Vol. 2 (2008), pp. 171-199, doi:10.1007/978-3-540-74831-19 by M. Neteler, D. Beaudette, P. Cavallini, L. Lami, J. Cepicky edited by G. Brent Hall, Michael G. Leahy [2] Gebbert, S., Pebesma, E., 2014. A temporal GIS for field based environmental modeling. Environ. Model. Softw. 53, 1-12. [3] Zambelli, P., Gebbert, S., Ciolli, M., 2013. Pygrass: An Object Oriented Python Application Programming Interface (API) for Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS). ISPRS Intl Journal of Geo-Information 2, 201-219. [4] Löwe, P., Klump, J., Thaler, J. (2012): The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster, (Geophysical Research Abstracts Vol. 14, EGU2012-4491, 2012), General Assembly European Geosciences Union (Vienna, Austria 2012). [5] Akhter, S., Aida, K., Chemin, Y., 2010. "GRASS GIS on High Performance Computing with MPI, OpenMP and Ninf-G Programming Framework". ISPRS Conference, Kyoto, 9-12 August 2010
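A hedged sketch of the temporal workflow described above, using the GRASS Python scripting library to call the temporal modules; it must be run inside a GRASS session, and all dataset and map names are placeholders:

```python
import grass.script as gs

# Create an empty space time raster dataset (strds).
gs.run_command("t.create", type="strds", temporaltype="absolute",
               output="precip_monthly", title="Monthly precipitation",
               description="Illustrative example")

# Register existing time-stamped raster maps in the dataset.
gs.run_command("t.register", input="precip_monthly",
               maps="precip_jan,precip_feb", start="2000-01-01",
               increment="1 month")

# Aggregate the monthly series to yearly sums.
gs.run_command("t.rast.aggregate", input="precip_monthly",
               output="precip_yearly", basename="precip_year",
               granularity="1 year", method="sum")
```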
DOE Office of Scientific and Technical Information (OSTI.GOV)
TESP combines existing domain simulators of the electric power grid with new transactive agents, growth models and evaluation scripts. The existing domain simulators include GridLAB-D for the distribution grid and single-family residential buildings, MATPOWER for transmission and bulk generation, and EnergyPlus for large buildings. More are planned for subsequent versions of TESP. The new elements are: TEAgents - simulate market participants and transactive systems for market clearing; some of this functionality was extracted from GridLAB-D and implemented in Python for customization by PNNL and others. Growth Model - a means for simulating system changes over a multiyear period, including both normal load growth and specific investment decisions; customizable in Python code. Evaluation Script - a means of evaluating different transactive systems through customizable post-processing in Python code. TESP provides a method for other researchers and vendors to design transactive systems and test them in a virtual environment. It allows customization of the key components by modifying Python code.
FRED 2: an immunoinformatics framework for Python
Schubert, Benjamin; Walzer, Mathias; Brachvogel, Hans-Philipp; Szolek, András; Mohr, Christopher; Kohlbacher, Oliver
2016-01-01
Summary: Immunoinformatics approaches are widely used in a variety of applications from basic immunological to applied biomedical research. Complex data integration is inevitable in immunological research and usually requires comprehensive pipelines including multiple tools and data sources. Non-standard input and output formats of immunoinformatics tools make the development of such applications difficult. Here we present FRED 2, an open-source immunoinformatics framework offering easy and unified access to methods for epitope prediction and other immunoinformatics applications. FRED 2 is implemented in Python and designed to be extendable and flexible to allow rapid prototyping of complex applications. Availability and implementation: FRED 2 is available at http://fred-2.github.io Contact: schubert@informatik.uni-tuebingen.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153717
PyGOLD: a python based API for docking based virtual screening workflow generation.
Patel, Hitesh; Brinkjost, Tobias; Koch, Oliver
2017-08-15
Molecular docking is one of the successful approaches in structure-based discovery and development of bioactive molecules in chemical biology and medicinal chemistry. Due to the huge amount of computational time that is still required, docking is often the last step in a virtual screening approach. Such screenings are set up as workflows spanning many steps, each aimed at a different filtering task. These workflows can be largely automated using Python-based toolkits, except for docking with the GOLD docking software. However, within an automated virtual screening workflow it is not feasible to use the GUI in between every step to change the GOLD configuration file. Thus, a Python module called PyGOLD was developed to parse, edit and write the GOLD configuration file and to automate docking-based virtual screening workflows. The latest version of PyGOLD, its documentation and example scripts are available at: http://www.ccb.tu-dortmund.de/koch or http://www.agkoch.de. PyGOLD is implemented in Python and can be imported as a standard Python module without any further dependencies. oliver.koch@agkoch.de, oliver.koch@tu-dortmund.de. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plimpton, Steve; Jones, Matt; Crozier, Paul
2006-01-01
Pizza.py is a loosely integrated collection of tools, many of which provide support for the LAMMPS molecular dynamics and ChemCell cell modeling packages. There are tools to create input files, convert between file formats, process log and dump files, create plots, and visualize and animate simulation snapshots. Software packages that are wrapped by Pizza.py, so they can be invoked from within Python, include GnuPlot, MatLab, Raster3d, and RasMol. Pizza.py is written in Python and runs on any platform that supports Python. Pizza.py enhances the standard Python interpreter in a few simple ways. Its tools are Python modules which can be invoked interactively, from scripts, or from GUIs when appropriate. Some of the tools require additional Python packages to be installed as part of the user's Python. Others are wrappers on software packages (as listed above) which must be available on the user's system. It is easy to modify or extend Pizza.py with new functionality or new tools, which need not have anything to do with LAMMPS or ChemCell.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William Eugene
These slides describe different strategies for installing Python software. Although I am a big fan of Python software development, robust strategies for software installation remain a challenge. This talk describes several different installation scenarios. The Good: the user has administrative privileges - Installing on Windows with an installer executable, Installing with a Linux application utility, Installing a Python package from the PyPI repository, and Installing a Python package from source. The Bad: the user does not have administrative privileges - Using a virtual environment to isolate package installations, and Using an installer executable on Windows with a virtual environment. The Ugly: the user needs to install an extension package from source - Installing a Python extension package from source, and PyCoinInstall - Managing builds for Python extension packages. The last item, referring to PyCoinInstall, describes a utility being developed for the COIN-OR software, which is used within the operations research community. COIN-OR includes a variety of Python and C++ software packages, and this script uses a simple plug-in system to support the management of package builds and installation.
Climate Model Diagnostic Analyzer Web Service System
NASA Astrophysics Data System (ADS)
Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.
2014-12-01
We have developed a cloud-enabled web-service system that empowers physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks. The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the observational datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation, (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs, and (3) ECMWF reanalysis outputs for several environmental variables in order to supplement observational datasets. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, (4) the calculation of difference between two variables, and (5) the conditional sampling of one physical variable with respect to another variable. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA will be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. In order to support 30+ simultaneous users during the school, we have deployed CMDA to the Amazon cloud environment. The cloud-enabled CMDA will provide each student with a virtual machine while the user interaction with the system will remain the same through web-browser interfaces. The summer school will serve as a valuable testbed for the tool development, preparing CMDA to serve its target community: Earth-science modeling and model-analysis community.
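The abstract does not name the web framework used; the sketch below is only a generic illustration of the wrapping methodology, exposing a stand-in analysis function as an HTTP endpoint with Flask:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def seasonal_mean(variable, season):
    """Placeholder for the existing science code being wrapped."""
    return {"variable": variable, "season": season, "mean": 0.0}

@app.route("/seasonal_mean")
def seasonal_mean_service():
    result = seasonal_mean(request.args.get("variable", "tas"),
                           request.args.get("season", "DJF"))
    return jsonify(result)

if __name__ == "__main__":
    app.run()
```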
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veseli, S.
As the number of sites deploying and adopting EPICS Version 4 grows, so does the need to support PV Access from multiple languages. Especially important are the widely used scripting languages that tend to reduce both software development time and the learning curve for new users. In this paper we describe PvaPy, a Python API for the EPICS PV Access protocol and its accompanying structured data API. Rather than implementing the protocol itself in Python, PvaPy wraps the existing EPICS Version 4 C++ libraries using the Boost.Python framework. This approach allows us to benefit from the existing code base and functionality, and to significantly reduce the Python API development effort. PvaPy objects are based on Python dictionaries and provide users with the ability to access even the most complex of PV Data structures in a relatively straightforward way. Its interfaces are easy to use, and include support for advanced EPICS Version 4 features such as implementation of client and server Remote Procedure Calls (RPC).
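A minimal, hedged sketch of client-side PV Access calls with the pvaccess module installed by PvaPy; the channel name is a placeholder:

```python
import pvaccess

channel = pvaccess.Channel("example:calc:sum")   # placeholder channel name
value = channel.get()        # returns a PvObject wrapping the structured value
print(value)                 # PvaPy objects print much like Python dictionaries
channel.put(3.14)            # write a new value (assumes a numeric channel)
```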
COBRApy: COnstraints-Based Reconstruction and Analysis for Python.
Ebrahim, Ali; Lerman, Joshua A; Palsson, Bernhard O; Hyduke, Daniel R
2013-08-08
COnstraint-Based Reconstruction and Analysis (COBRA) methods are widely used for genome-scale modeling of metabolic networks in both prokaryotes and eukaryotes. Due to the successes with metabolism, there is an increasing effort to apply COBRA methods to reconstruct and analyze integrated models of cellular processes. The COBRA Toolbox for MATLAB is a leading software package for genome-scale analysis of metabolism; however, it was not designed to elegantly capture the complexity inherent in integrated biological networks and lacks an integration framework for the multiomics data used in systems biology. The openCOBRA Project is a community effort to promote constraints-based research through the distribution of freely available software. Here, we describe COBRA for Python (COBRApy), a Python package that provides support for basic COBRA methods. COBRApy is designed in an object-oriented fashion that facilitates the representation of the complex biological processes of metabolism and gene expression. COBRApy does not require MATLAB to function; however, it includes an interface to the COBRA Toolbox for MATLAB to facilitate use of legacy codes. For improved performance, COBRApy includes parallel processing support for computationally intensive processes. COBRApy is an object-oriented framework designed to meet the computational challenges associated with the next generation of stoichiometric constraint-based models and high-density omics data sets. http://opencobra.sourceforge.net/
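A minimal sketch of flux balance analysis with COBRApy, assuming a genome-scale model is available as an SBML file (the file name is illustrative):

```python
import cobra

model = cobra.io.read_sbml_model("e_coli_core.xml")
solution = model.optimize()            # maximize the model objective (e.g. growth)
print(solution.objective_value)
print(solution.fluxes.head())          # pandas Series of reaction fluxes
```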
A Python-based interface to examine motions in time series of solar images
NASA Astrophysics Data System (ADS)
Campos-Rozo, J. I.; Vargas Domínguez, S.
2017-10-01
Python is considered a mature programming language and is widely accepted as an engaging option for scientific analysis in multiple areas, as presented in this work for the particular case of solar physics research. SunPy is an open-source library based on Python that has been recently developed to furnish software tools for solar data analysis and visualization. In this work we present a graphical user interface (GUI) based on Python and Qt to effectively compute proper motions for the analysis of time series of solar data. This user-friendly computing interface, which is intended to be incorporated into the SunPy library, uses a local correlation tracking technique and some extra tools that allow the selection of different parameters to calculate, visualize and analyze vector velocity fields of solar data, i.e. time series of solar filtergrams and magnetograms.
GenomeDiagram: a python package for the visualization of large-scale genomic data.
Pritchard, Leighton; White, Jennifer A; Birch, Paul R J; Toth, Ian K
2006-03-01
We present GenomeDiagram, a flexible, open-source Python module for the visualization of large-scale genomic, comparative genomic and other data with reference to a single chromosome or other biological sequence. GenomeDiagram may be used to generate publication-quality vector graphics, rastered images and in-line streamed graphics for webpages. The package integrates with datatypes from the BioPython project, and is available for Windows, Linux and Mac OS X systems. GenomeDiagram is freely available as source code (under GNU Public License) at http://bioinf.scri.ac.uk/lp/programs.html, and requires Python 2.3 or higher, and recent versions of the ReportLab and BioPython packages. A user manual, example code and images are available at http://bioinf.scri.ac.uk/lp/programs.html.
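A minimal, hedged sketch of the Biopython integration described above, drawing the CDS features of a GenBank record (the file name is a placeholder):

```python
from Bio import SeqIO
from Bio.Graphics import GenomeDiagram

record = SeqIO.read("plasmid.gb", "genbank")
diagram = GenomeDiagram.Diagram("Example plasmid")
feature_set = diagram.new_track(1, name="CDS features").new_set()
for feature in record.features:
    if feature.type == "CDS":
        feature_set.add_feature(feature, label=True)

diagram.draw(format="circular", pagesize="A4", start=0, end=len(record))
diagram.write("plasmid.pdf", "PDF")    # publication-quality vector output
```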
Optimal Repair And Replacement Policy For A System With Multiple Components
2016-06-17
Numerical Demonstration To implement the linear program, we use the Python Programming Language (PSF 2016) with the Pyomo optimization modeling language...opre.1040.0133. Hart, W.E., C. Laird, J. Watson, D.L. Woodruff. 2012. Pyomo - optimization modeling in Python, vol. 67. Springer Science & Business...Media. Hart, W.E., J. Watson, D.L. Woodruff. 2011. Pyomo: modeling and solving mathematical programs in Python. Mathematical Programming Computation 3(3
Methods to Secure Databases Against Vulnerabilities
2015-12-01
for several languages such as C, C++, PHP, Java and Python [16]. MySQL will work well with very large databases. The documentation references...using Eclipse and connected to each database management system using Python and Java drivers provided by MySQL, MongoDB, and Datastax (for Cassandra...tiers in Python and Java. Problem MySQL MongoDB Cassandra 1. Injection a. Tautologies Vulnerable Vulnerable Not Vulnerable b. Illegal query
Gautier, Laurent
2010-12-21
Computer languages can be domain-related, and in the case of multidisciplinary projects, knowledge of several languages will be needed in order to quickly implement ideas. Moreover, each computer language has relative strengths, making some languages better suited than others for a given task to be implemented. The Bioconductor project, based on the R language, has become a reference for the numerical processing and statistical analysis of data coming from high-throughput biological assays, providing a rich selection of methods and algorithms to the research community. At the same time, Python has matured as a rich and reliable language for the agile development of prototypes or final implementations, as well as for handling large data sets. The data structures and functions from Bioconductor can be exposed to Python as a regular library. This allows a fully transparent and native use of Bioconductor from Python, without one having to know the R language and with only a small community of translators required to know both. To demonstrate this, we have implemented such Python representations for key infrastructure packages in Bioconductor, letting a Python programmer handle annotation data, microarray data, and next-generation sequencing data. Bioconductor is now no longer reserved solely for R users. Building a Python application using Bioconductor functionality can be done just as if Bioconductor were a Python package. Moreover, similar principles can be applied to other languages and libraries. Our Python package is available at: http://pypi.python.org/pypi/rpy2-bioconductor-extensions/.
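The abstract does not list the classes provided by the extension package itself; the sketch below shows only the underlying rpy2 mechanism for calling a Bioconductor package from Python, on top of which the described representations are built:

```python
# Plain rpy2 usage (not the extension package's own API).
from rpy2.robjects import r
from rpy2.robjects.packages import importr

biobase = importr("Biobase")              # load a core Bioconductor package
eset = r('new("ExpressionSet")')          # create an (empty) ExpressionSet in R
print(r("class")(eset)[0])                # prints "ExpressionSet"
```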
Reyes-Velasco, Jacobo; Card, Daren C; Andrew, Audra L; Shaney, Kyle J; Adams, Richard H; Schield, Drew R; Casewell, Nicholas R; Mackessy, Stephen P; Castoe, Todd A
2015-01-01
Snake venom gene evolution has been studied intensively over the past several decades, yet most previous studies have lacked the context of complete snake genomes and the full context of gene expression across diverse snake tissues. We took a novel approach to studying snake venom evolution by leveraging the complete genome of the Burmese python, including information from tissue-specific patterns of gene expression. We identified the orthologs of snake venom genes in the python genome, and conducted detailed analysis of gene expression of these venom homologs to identify patterns that differ between snake venom gene families and all other genes. We found that venom gene homologs in the python are expressed in many different tissues outside of oral glands, which illustrates the pitfalls of using transcriptomic data alone to define "venom toxins." We hypothesize that the python may represent an ancestral state prior to major venom development, which is supported by our finding that the expansion of venom gene families is largely restricted to highly venomous caenophidian snakes. Therefore, the python provides insight into biases in which genes were recruited for snake venom systems. Python venom homologs are generally expressed at lower levels, have higher variance among tissues, and are expressed in fewer organs compared with all other python genes. We propose a model for the evolution of snake venoms in which venom genes are recruited preferentially from genes with particular expression profile characteristics, which facilitate a nearly neutral transition toward specialized venom system expression. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
π Scope: python based scientific workbench with visualization tool for MDSplus data
NASA Astrophysics Data System (ADS)
Shiraiwa, S.
2014-10-01
π Scope is a python based scientific data analysis and visualization tool constructed on wxPython and Matplotlib. Although it is designed to be a generic tool, the primary motivation for developing the new software is 1) to provide an updated tool to browse MDSplus data, with functionalities beyond dwscope and jScope, and 2) to provide a universal foundation to construct interface tools to perform computer simulation and modeling for Alcator C-Mod. It provides many features to visualize MDSplus data during tokamak experiments including overplotting different signals and discharges, various plot types (line, contour, image, etc.), in-panel data analysis using python scripts, and publication quality graphics generation. Additionally, the logic to produce multi-panel plots is designed to be backward compatible with dwscope, enabling smooth migration for dwscope users. πScope uses multi-threading to reduce data transfer latency, and its object-oriented design makes it easy to modify and expand while the open source nature allows portability. A built-in tree data browser allows a user to approach the data structure both from a GUI and a script, enabling relatively complex data analysis workflow to be built quickly. As an example, an IDL-based interface to perform GENRAY/CQL3D simulations was ported on πScope, thus allowing LHCD simulation to be run between-shot using C-Mod experimental profiles. This workflow is being used to generate a large database to develop a LHCD actuator model for the plasma control system. Supported by USDoE Award DE-FC02-99ER54512.
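As a hedged illustration of the kind of MDSplus access πScope is built around, the sketch below uses the standard MDSplus Python bindings; the tree name, shot number and node path are placeholders:

```python
from MDSplus import Tree

tree = Tree("cmod", 1120815000)                 # open one shot's data tree
node = tree.getNode(r"\electrons::top.ne")      # placeholder signal node
density = node.data()                           # numpy array of the signal
time = node.dim_of().data()                     # matching time base
```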
Python-based geometry preparation and simulation visualization toolkits for STEPS
Chen, Weiliang; De Schutter, Erik
2014-01-01
STEPS is a stochastic reaction-diffusion simulation engine that implements a spatial extension of Gillespie's Stochastic Simulation Algorithm (SSA) in complex tetrahedral geometries. An extensive Python-based interface is provided to STEPS so that it can interact with the large number of scientific packages in Python. However, a gap existed between the interfaces of these packages and the STEPS user interface, where supporting toolkits could reduce the amount of scripting required for research projects. This paper introduces two new supporting toolkits that support geometry preparation and visualization for STEPS simulations. PMID:24782754
Multidisciplinary Conceptual Design for Reduced-Emission Rotorcraft
NASA Technical Reports Server (NTRS)
Silva, Christopher; Johnson, Wayne; Solis, Eduardo
2018-01-01
Python-based wrappers for OpenMDAO are used to integrate disparate software for practical conceptual design of rotorcraft. The suite of tools which are connected thus far include aircraft sizing, comprehensive analysis, and parametric geometry. The tools are exercised to design aircraft with aggressive goals for emission reductions relative to fielded state-of-the-art rotorcraft. Several advanced reduced-emission rotorcraft are designed and analyzed, demonstrating the flexibility of the tools to consider a wide variety of potentially transformative vertical flight vehicles. To explore scale effects, aircraft have been sized for 5, 24, or 76 passengers in their design missions. Aircraft types evaluated include tiltrotor, single-main-rotor, coaxial, and side-by-side helicopters. Energy and drive systems modeled include Lithium-ion battery, hydrogen fuel cell, turboelectric hybrid, and turboshaft drive systems. Observations include the complex nature of the trade space for this simple problem, with many potential aircraft design and operational solutions for achieving significant emission reductions. Also interesting is that achieving greatly reduced emissions may not require exotic component technologies, but may be achieved with a dedicated design objective of reducing emissions.
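The NASA wrappers themselves are not shown in the abstract; the sketch below is only a generic illustration of wrapping an analysis as an OpenMDAO component, the pattern used to couple the sizing, comprehensive analysis, and geometry tools:

```python
import openmdao.api as om

class EmissionsEstimate(om.ExplicitComponent):
    """Stand-in analysis: CO2 from fuel burn via a simple emission index."""
    def setup(self):
        self.add_input("fuel_burn", units="kg")
        self.add_output("co2", units="kg")

    def compute(self, inputs, outputs):
        outputs["co2"] = 3.16 * inputs["fuel_burn"]

prob = om.Problem()
prob.model.add_subsystem("emissions", EmissionsEstimate(), promotes=["*"])
prob.setup()
prob.set_val("fuel_burn", 500.0)
prob.run_model()
print(prob.get_val("co2"))
```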
Wang, Anliang; Yan, Xiaolong; Wei, Zhijun
2018-04-27
This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution concentrates on facilitating extensibility and interoperability of the software by decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.
Astronomical Simulations Using Visual Python
NASA Astrophysics Data System (ADS)
Cobb, Michael L.
2007-05-01
The Physics and Engineering Physics Department at Southeast Missouri State University has adopted the “Matter and Interactions I: Modern Mechanics” text by Chabay and Sherwood for our calculus-based introductory physics course. We have fully integrated the use of modeling and simulations by using the Visual Python language, also known as VPython. This powerful, high-level, object-oriented language with full three-dimensional, stereo graphics has stimulated both my students and myself to find wider applications for our newfound skills. We have successfully modeled gravitational resonances in planetary rings, galaxy collisions, and planetary orbits around binary star systems. This talk will provide a quick overview of VPython and demonstrate the various simulations.
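A minimal sketch of the kind of orbital simulation described above, written against the modern vpython module (the course itself may have used the older visual import); the numbers are illustrative rather than a tuned model:

```python
from vpython import color, rate, sphere, vector

G = 6.67e-11
star = sphere(pos=vector(0, 0, 0), radius=7e8, color=color.yellow)
star.mass = 2e30
planet = sphere(pos=vector(1.5e11, 0, 0), radius=6e6, color=color.blue,
                make_trail=True)
planet.mass = 6e24
planet.velocity = vector(0, 3e4, 0)

dt = 3600.0
for _ in range(24 * 365):                 # one simulated year in hourly steps
    rate(200)                             # limit animation speed
    r = planet.pos - star.pos
    force = -G * star.mass * planet.mass * r.norm() / r.mag ** 2
    planet.velocity += force / planet.mass * dt
    planet.pos += planet.velocity * dt
```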
SPOTting Model Parameters Using a Ready-Made Python Package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz
2017-04-01
The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine in which we searched for parameters of the van Genuchten-Mualem function, and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
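A hedged sketch of the calling pattern: SPOTPY expects a user-written setup class with parameters, simulation, evaluation and objectivefunction methods; the toy setup below fits a single slope parameter and is purely illustrative:

```python
import numpy as np
import spotpy

class SpotSetup:
    """Toy setup: recover the slope of y = a*x from synthetic observations."""
    def __init__(self):
        self.params = [spotpy.parameter.Uniform("a", 0.0, 5.0)]
        self.x = np.arange(10.0)
        self.obs = 2.5 * self.x

    def parameters(self):
        return spotpy.parameter.generate(self.params)

    def simulation(self, vector):
        return list(vector[0] * self.x)

    def evaluation(self):
        return list(self.obs)

    def objectivefunction(self, simulation, evaluation):
        # SCE-UA minimizes the objective, so return the RMSE directly.
        return spotpy.objectivefunctions.rmse(evaluation, simulation)

sampler = spotpy.algorithms.sceua(SpotSetup(), dbname="sceua_demo", dbformat="csv")
sampler.sample(500)                       # number of model runs
```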
Food consumption increases cell proliferation in the python brain.
Habroun, Stacy S; Schaffner, Andrew A; Taylor, Emily N; Strand, Christine R
2018-04-06
Pythons are model organisms for investigating physiological responses to food intake. While systemic growth in response to food consumption is well documented, what occurs in the brain is currently unexplored. In this study, male ball pythons (Python regius) were used to test the hypothesis that food consumption stimulates cell proliferation in the brain. We used 5-bromo-2'-deoxyuridine (BrdU) as a cell-birth marker to quantify and compare cell proliferation in the brain of fasted snakes and those at 2 and 6 days after a meal. Throughout the telencephalon, cell proliferation was significantly increased in the 6-day group, with no difference between the 2-day group and controls. Systemic postprandial plasticity occurs quickly after a meal is ingested, during the period of active digestion; however, the brain displays a surge of cell proliferation after most digestion and absorption is complete. © 2018. Published by The Company of Biologists Ltd.
NASA Technical Reports Server (NTRS)
Grillo, Vince
2017-01-01
The objective of this presentation is to give a brief overview of the theory behind the DBA method, an overview of the derivation, and a practical application of the theory using the Python computer language. The theory and derivation will use both Acceleration and Pseudo Velocity methods to derive a series of equations for processing by Python. We will take the results, compare both Acceleration and Pseudo Velocity methods, and discuss implementation of the Python functions. We will also discuss the efficiency of the methods and the amount of computer time required for the solution. In conclusion, DBA offers a powerful method to evaluate the amount of energy imparted into a system in the form of both amplitude and duration during qualification testing and flight environments. Many forms of steady-state and transient vibratory motion can be characterized using this technique. DBA provides a more robust alternative to traditional methods such as Power Spectral Density (PSD) using a maximax approach.
Scripting MODFLOW model development using Python and FloPy
Bakker, Mark; Post, Vincent E. A.; Langevin, Christian D.; Hughes, Joseph D.; White, Jeremy; Starn, Jeffrey; Fienen, Michael N.
2016-01-01
Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy.
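A minimal sketch of the scripted workflow described above; the grid, package choices and file names are illustrative, and a MODFLOW-2005 executable must be on the path for run_model() to succeed:

```python
import flopy

m = flopy.modflow.Modflow("demo", exe_name="mf2005")
dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10, top=10.0, botm=0.0)
bas = flopy.modflow.ModflowBas(m)                 # default ibound and starting heads
lpf = flopy.modflow.ModflowLpf(m, hk=1.0)         # uniform hydraulic conductivity
pcg = flopy.modflow.ModflowPcg(m)                 # solver
oc = flopy.modflow.ModflowOc(m)                   # output control

m.write_input()                                   # write all MODFLOW input files
success, buff = m.run_model()                     # run MODFLOW and capture output
```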
Coordinates and intervals in graph-based reference genomes.
Rand, Knut D; Grytten, Ivar; Nederbragt, Alexander J; Storvik, Geir O; Glad, Ingrid K; Sandve, Geir K
2017-05-18
It has been proposed that future reference genomes should be graph structures in order to better represent the sequence diversity present in a species. However, there is currently no standard method to represent genomic intervals, such as the positions of genes or transcription factor binding sites, on graph-based reference genomes. We formalize offset-based coordinate systems on graph-based reference genomes and introduce methods for representing intervals on these reference structures. We show the advantage of our methods by representing genes on a graph-based representation of the newest assembly of the human genome (GRCh38) and its alternative loci for regions that are highly variable. More complex reference genomes, containing alternative loci, require methods to represent genomic data on these structures. Our proposed notation for genomic intervals makes it possible to fully utilize the alternative loci of the GRCh38 assembly and potential future graph-based reference genomes. We have made a Python package for representing such intervals on offset-based coordinate systems, available at https://github.com/uio-cels/offsetbasedgraph . An interactive web-tool using this Python package to visualize genes on a graph created from GRCh38 is available at https://github.com/uio-cels/genomicgraphcoords .
ModFossa: A library for modeling ion channels using Python.
Ferneyhough, Gareth B; Thibealut, Corey M; Dascalu, Sergiu M; Harris, Frederick C
2016-06-01
The creation and simulation of ion channel models using continuous-time Markov processes is a powerful and well-used tool in the field of electrophysiology and ion channel research. While several software packages exist for the purpose of ion channel modeling, most are GUI based, and none are available as a Python library. In an attempt to provide an easy-to-use, yet powerful Markov model-based ion channel simulator, we have developed ModFossa, a Python library supporting easy model creation and stimulus definition, complete with a fast numerical solver, and attractive vector graphics plotting.
Amebiasis in four ball pythons, Python reginus.
Kojimoto, A; Uchida, K; Horii, Y; Okumura, S; Yamaguch, R; Tateyama, S
2001-12-01
Between September 13th and November 18th in 1999, four ball pythons, Python reginus kept in the same display, showed anorexia and died one after another. At necropsy, all four snakes had severe hemorrhagic colitis. Microscopically, all snakes had severe necrotizing hemorrhagic colitis, in association with ameba-like protozoa. Some of the protozoa had macrophage-like morphology and others formed protozoal cysts with thickened walls. These protozoa were distributed throughout the wall in the large intestine. Based on the pathological findings, these snakes were infested with a member of Entamoeba sp., presumably with infection by Entamoeba invadens, the most prevalent type of reptilian amoebae.
Snowflake: A Lightweight Portable Stencil DSL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Nathan; Driscoll, Michael; Markley, Charles
Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a 'micro-compiler' approach, i.e., small, focused, domain-specific code generators. The approach is similar to that used in image processing stencils, but Snowflake handles the much more complex stencils that arise in scientific computing, including complex boundary conditions, higher-order operators (larger stencils), higher dimensions, variable coefficients, non-unit-stride iteration spaces, and multiple input or output meshes. Snowflake is embedded in the Python language, allowing it to interoperate with popular scientific tools like SciPy and IPython; it also takes advantage of built-in Python libraries for powerful dependence analysis as part of a just-in-time compiler. We demonstrate the power of the Snowflake language and the micro-compiler approach with a complex scientific benchmark, HPGMG, that exercises the generality of stencil support in Snowflake. By generating OpenMP comparable to, and OpenCL within a factor of 2x of hand-optimized HPGMG, Snowflake demonstrates that a micro-compiler can support diverse processor architectures and is performance-competitive whilst preserving a high-level Python implementation.
Snowflake: A Lightweight Portable Stencil DSL
Zhang, Nathan; Driscoll, Michael; Markley, Charles; ...
2017-05-01
Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a 'micro-compiler' approach, i.e., small, focused, domain-specific code generators. The approach is similar to that used in image processing stencils, but Snowflake handles the much more complex stencils that arise in scientific computing, including complex boundary conditions, higher-order operators (larger stencils), higher dimensions, variable coefficients, non-unit-stride iteration spaces, and multiple input or output meshes. Snowflake is embedded in the Python language, allowing it to interoperate with popular scientific tools like SciPy and IPython; it also takes advantage of built-in Python libraries for powerful dependence analysis as part of a just-in-time compiler. We demonstrate the power of the Snowflake language and the micro-compiler approach with a complex scientific benchmark, HPGMG, that exercises the generality of stencil support in Snowflake. By generating OpenMP comparable to, and OpenCL within a factor of 2x of hand-optimized HPGMG, Snowflake demonstrates that a micro-compiler can support diverse processor architectures and is performance-competitive whilst preserving a high-level Python implementation.
C3I and Modelling and Simulation (M&S) Interoperability
2004-03-01
customised Open Source products. The technical implementation is based on the use of the eXtended Markup Language (XML) and Python. XML is developed...to structure, store and send information. The language focuses on the description of data. Python is a portable, interpreted, object-oriented...programming language. A huge variety of usable Open Source Projects were issued by the Python Community. 3.1 Phase 1: Feasibility Studies Phase 1 was
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-01
... Python Species, and Four Anaconda Species as Injurious Reptiles AGENCY: Fish and Wildlife Service... regulations to add Indian python (Python molurus, including Burmese python Python molurus bivittatus), reticulated python (Broghammerus reticulatus or Python reticulatus), Northern African python (Python sebae...
PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.
Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J
2010-03-01
PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.
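For orientation, a hedged sketch of an interactive PyRosetta session in the PyRosetta-4 style is shown below; exact helper names can differ between releases, and a licensed PyRosetta install is assumed:

```python
# Hedged sketch: a minimal interactive PyRosetta session (PyRosetta-4 style);
# helper names may differ between PyRosetta releases.
import pyrosetta

pyrosetta.init()                                        # start the Rosetta core
pose = pyrosetta.pose_from_sequence("ACDEFGHIKL")       # build an extended peptide pose
scorefxn = pyrosetta.create_score_function("ref2015")   # full-atom score function
print("initial score:", scorefxn(pose))

# Perturb one backbone torsion and re-score: the kind of step a custom
# sampling algorithm would loop over.
pose.set_phi(5, -60.0)
pose.set_psi(5, -45.0)
print("after perturbation:", scorefxn(pose))
```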
PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta
Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.
2010-01-01
Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. Availability: PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site. Contact: pyrosetta@graylab.jhu.edu PMID:20061306
A general spectral method for the numerical simulation of one-dimensional interacting fermions
NASA Astrophysics Data System (ADS)
Clason, Christian; von Winckel, Gregory
2012-08-01
This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach.
New version program summary
Program title: assembleFermiMatrix
Catalogue identifier: AEKO_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 332
No. of bytes in distributed program, including test data, etc.: 5418
Distribution format: tar.gz
Programming language: MATLAB/GNU Octave, Python
Computer: Any architecture supported by MATLAB, GNU Octave or Python
Operating system: Any supported by MATLAB, GNU Octave or Python
RAM: Depends on the data
Classification: 4.3, 2.2
External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+
Catalogue identifier of previous version: AEKO_v1_0
Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405
Does the new version supersede the previous version?: Yes
Nature of problem: The direct numerical solution of the multi-particle one-dimensional Schrödinger equation in a quantum well is challenging due to the exponential growth in the number of degrees of freedom with increasing particles.
Solution method: A nodal spectral Galerkin scheme is used where the basis functions are constructed to obey the antisymmetry relations of the fermionic wave function. The assembly of these matrices is performed efficiently by exploiting the combinatorial structure of the sparsity patterns.
Reasons for new version: A Python implementation is now included.
Summary of revisions: Added a Python implementation; small documentation fixes in Matlab implementation. No change in features of the package.
Restrictions: Only one-dimensional computational domains with homogeneous Dirichlet or periodic boundary conditions are supported.
Running time: Seconds to minutes.
Software Tools for Development on the Peregrine System | High-Performance Computing | NREL
Tools to develop and manage software at the source code level. Cross-Platform Make and SCons: the "Cross-Platform Make" (CMake) package is from Kitware, and SCons is a modern software build tool based on Python.
COMP Superscalar, an interoperable programming framework
NASA Astrophysics Data System (ADS)
Badia, Rosa M.; Conejero, Javier; Diaz, Carlos; Ejarque, Jorge; Lezzi, Daniele; Lordan, Francesc; Ramon-Cortes, Cristian; Sirvent, Raul
2015-12-01
COMPSs is a programming framework that aims to facilitate the parallelization of existing applications written in Java, C/C++ and Python scripts. For that purpose, it offers a simple programming model based on sequential development in which the user is mainly responsible for (i) identifying the functions to be executed as asynchronous parallel tasks and (ii) annotating them with annotations or standard Python decorators. A runtime system is in charge of exploiting the inherent concurrency of the code, automatically detecting and enforcing the data dependencies between tasks and spawning these tasks to the available resources, which can be nodes in a cluster, clouds or grids. In cloud environments, COMPSs provides scalability and elasticity features allowing the dynamic provision of resources.
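A hedged sketch of the decorator-based model described above, with import paths as given in the PyCOMPSs documentation (a working COMPSs runtime is assumed; the workload is a toy example):

```python
# Hedged sketch of the PyCOMPSs decorator model (assumes a COMPSs runtime;
# import paths follow the PyCOMPSs documentation).
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on


@task(returns=1)
def partial_sum(block):
    # Each call becomes an asynchronous task scheduled by the COMPSs runtime.
    return sum(block)


def total(data, block_size=1000):
    futures = [partial_sum(data[i:i + block_size])
               for i in range(0, len(data), block_size)]
    # Synchronize: the runtime resolves data dependencies and gathers results.
    results = compss_wait_on(futures)
    return sum(results)


if __name__ == "__main__":
    print(total(list(range(10000))))
```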
2014-09-01
get install python2.7 python-openssl python-gevent libevent-dev python2.7-dev build-essential make liblapack-dev libmysqlclient-dev python-chardet ... apt-get install python-dev openssl python-openssl python-pyasn1 python-twisted • apt-get install subversion • apt-get install authbind 4
Abby, Sophie S.; Néron, Bertrand; Ménager, Hervé; Touchon, Marie; Rocha, Eduardo P. C.
2014-01-01
Motivation Biologists often wish to use their knowledge on a few experimental models of a given molecular system to identify homologs in genomic data. We developed a generic tool for this purpose. Results Macromolecular System Finder (MacSyFinder) provides a flexible framework to model the properties of molecular systems (cellular machinery or pathway) including their components, evolutionary associations with other systems and genetic architecture. Modelled features also include functional analogs, and the multiple uses of the same component by different systems. Models are used to search for molecular systems in complete genomes or in unstructured data like metagenomes. The components of the systems are searched by sequence similarity using Hidden Markov model (HMM) protein profiles. The assignment of hits to a given system is decided based on compliance with the content and organization of the system model. A graphical interface, MacSyView, facilitates the analysis of the results by showing overviews of component content and genomic context. To exemplify the use of MacSyFinder we built models to detect and class CRISPR-Cas systems following a previously established classification. We show that MacSyFinder allows users to easily define an accurate “Cas-finder” using publicly available protein profiles. Availability and Implementation MacSyFinder is a standalone application implemented in Python. It requires Python 2.7, Hmmer and makeblastdb (version 2.2.28 or higher). It is freely available with its source code under a GPLv3 license at https://github.com/gem-pasteur/macsyfinder. It is compatible with all platforms supporting Python and Hmmer/makeblastdb. The “Cas-finder” (models and HMM profiles) is distributed as a compressed tarball archive as Supporting Information. PMID:25330359
Using Python on the Peregrine System | High-Performance Computing | NREL
was not designed for use in a shared computing environment. The following example creates a new Python ... For example, an environment.yml file can be created on the developer's laptop and used on the ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maccarthy, Jonathan K.
2016-07-28
PyGeoTess is a Python interface module to the GeoTess gridding and earth model library from Sandia National Laboratories. It provides simplified access to a subset of the GeoTess C++ library, and takes advantage of Python's interactive interpreter and inline documentation system.
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
Uccellini, Lorenzo; Ossiboff, Robert J; de Matos, Ricardo E C; Morrisey, James K; Petrosov, Alexandra; Navarrete-Macias, Isamara; Jain, Komal; Hicks, Allison L; Buckles, Elizabeth L; Tokarz, Rafal; McAloose, Denise; Lipkin, Walter Ian
2014-08-08
Respiratory infections are important causes of morbidity and mortality in reptiles; however, the causative agents are only infrequently identified. Pneumonia, tracheitis and esophagitis were reported in a collection of ball pythons (Python regius). Eight of 12 snakes had evidence of bacterial pneumonia. High-throughput sequencing of total extracted nucleic acids from lung, esophagus and spleen revealed a novel nidovirus. PCR indicated the presence of viral RNA in lung, trachea, esophagus, liver, and spleen. In situ hybridization confirmed the presence of intracellular, intracytoplasmic viral nucleic acids in the lungs of infected snakes. Phylogenetic analysis based on a 1,136 amino acid segment of the polyprotein suggests that this virus may represent a new species in the subfamily Torovirinae. This report of a novel nidovirus in ball pythons may provide insight into the pathogenesis of respiratory disease in this species and enhances our knowledge of the diversity of nidoviruses.
STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python
Wils, Stefan; Schutter, Erik De
2008-01-01
We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code. PMID:19623245
Scripting MODFLOW Model Development Using Python and FloPy.
Bakker, M; Post, V; Langevin, C D; Hughes, J D; White, J T; Starn, J J; Fienen, M N
2016-09-01
Graphical user interfaces (GUIs) are commonly used to construct and postprocess numerical groundwater flow and transport models. Scripting model development with the programming language Python is presented here as an alternative approach. One advantage of Python is that there are many packages available to facilitate the model development process, including packages for plotting, array manipulation, optimization, and data analysis. For MODFLOW-based models, the FloPy package was developed by the authors to construct model input files, run the model, and read and plot simulation results. Use of Python with the available scientific packages and FloPy facilitates data exploration, alternative model evaluations, and model analyses that can be difficult to perform with GUIs. Furthermore, Python scripts are a complete, transparent, and repeatable record of the modeling process. The approach is introduced with a simple FloPy example to create and postprocess a MODFLOW model. A more complicated capture-fraction analysis with a real-world model is presented to demonstrate the types of analyses that can be performed using Python and FloPy. © 2016, National Ground Water Association.
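A minimal sketch of the build-run-read workflow described above, assuming flopy is installed and an mf2005 executable is on the path (the grid dimensions and aquifer properties below are illustrative, not from the paper):

```python
# Hedged sketch of the FloPy workflow: build, run, and read a tiny one-layer
# MODFLOW-2005 model (assumes flopy and an 'mf2005' executable on the path).
import numpy as np
import flopy

m = flopy.modflow.Modflow("demo", exe_name="mf2005", model_ws="demo_ws")
dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10,
                               delr=100.0, delc=100.0, top=10.0, botm=0.0)

ibound = np.ones((1, 10, 10), dtype=int)
ibound[:, :, 0] = -1          # fixed-head cells along the left edge
ibound[:, :, -1] = -1         # fixed-head cells along the right edge
strt = np.ones((1, 10, 10)) * 10.0
strt[:, :, -1] = 5.0          # lower head on the right drives flow
bas = flopy.modflow.ModflowBas(m, ibound=ibound, strt=strt)

lpf = flopy.modflow.ModflowLpf(m, hk=10.0)   # hydraulic conductivity
pcg = flopy.modflow.ModflowPcg(m)            # solver
oc = flopy.modflow.ModflowOc(m)              # output control

m.write_input()
success, _ = m.run_model(silent=True)
if success:
    heads = flopy.utils.HeadFile("demo_ws/demo.hds").get_data()
    print(heads[0, 5, :])     # simulated heads along one row
```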
pyro: Python-based tutorial for computational methods for hydrodynamics
NASA Astrophysics Data System (ADS)
Zingale, Michael
2015-07-01
pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
Naval Observatory Vector Astrometry Software (NOVAS) Version 3.1, Introducing a Python Edition
NASA Astrophysics Data System (ADS)
Barron, Eric G.; Kaplan, G. H.; Bangert, J.; Bartlett, J. L.; Puatua, W.; Harris, W.; Barrett, P.
2011-01-01
The Naval Observatory Vector Astrometry Software (NOVAS) is a source-code library that provides common astrometric quantities and transformations. NOVAS calculations are accurate at the sub-milliarcsecond level. The library can supply, in one or two subroutine or function calls, the instantaneous celestial position of any star or planet in a variety of coordinate systems. NOVAS also provides access to all of the building blocks that go into such computations. NOVAS Version 3.1 introduces a Python edition alongside the Fortran and C editions. The Python edition uses the computational code from the C edition and, currently, mimics the function calls of the C edition. Future versions will expand the functionality of the Python edition to harness the object-oriented nature of the Python language, and will implement the ability to handle large quantities of objects or observers using the array functionality in NumPy (a third-party scientific package for Python). NOVAS 3.1 also adds a module to transform GCRS vectors to the ITRS; the ITRS to GCRS transformation was already provided in NOVAS 3.0. The module that corrects an ITRS vector for polar motion has been modified to undo that correction upon demand. In the C edition, the ephemeris-access functions have been revised for use on 64-bit systems and for improved performance in general. NOVAS, including documentation, is available from the USNO website (http://www.usno.navy.mil/USNO/astronomical-applications/software-products/novas).
Parallel selective pressures drive convergent diversification of phenotypes in pythons and boas.
Esquerré, Damien; Scott Keogh, J
2016-07-01
Pythons and boas are globally distributed and distantly related radiations with remarkable phenotypic and ecological diversity. We tested whether pythons, boas and their relatives have evolved convergent phenotypes when they display similar ecology. We collected geometric morphometric data on head shape for 1073 specimens representing over 80% of species. We show that these two groups display strong and widespread convergence when they occupy equivalent ecological niches and that the history of phenotypic evolution strongly matches the history of ecological diversification, suggesting that both processes are strongly coupled. These results are consistent with replicated adaptive radiation in both groups. We argue that strong selective pressures related to habitat-use have driven this convergence. Pythons and boas provide a new model system for the study of macro-evolutionary patterns of morphological and ecological evolution and they do so at a deeper level of divergence and global scale than any well-established adaptive radiation model systems. © 2016 John Wiley & Sons Ltd/CNRS.
A Python library for FAIRer access and deposition to the Metabolomics Workbench Data Repository.
Smelter, Andrey; Moseley, Hunter N B
2018-01-01
The Metabolomics Workbench Data Repository is a public repository of mass spectrometry and nuclear magnetic resonance data and metadata derived from a wide variety of metabolomics studies. The data and metadata for each study are deposited, stored, and accessed via files in the domain-specific 'mwTab' flat file format. In order to improve the accessibility, reusability, and interoperability of the data and metadata stored in 'mwTab' formatted files, we implemented a Python library and package. This Python package, named 'mwtab', is a parser for the domain-specific 'mwTab' flat file format, which provides facilities for reading, accessing, and writing 'mwTab' formatted files. Furthermore, the package provides facilities to validate both the format and required metadata elements of a given 'mwTab' formatted file. In order to develop the 'mwtab' package we used the official 'mwTab' format specification. We used Git version control along with the Python unit-testing framework and a continuous integration service to run those tests on multiple versions of Python. Package documentation was developed using the Sphinx documentation generator. The 'mwtab' package provides both Python programmatic library interfaces and command-line interfaces for reading, writing, and validating 'mwTab' formatted files. Data and associated metadata are stored within Python dictionary- and list-based data structures, enabling straightforward, 'pythonic' access and manipulation of data and metadata. Also, the package provides facilities to convert 'mwTab' files into a JSON formatted equivalent, enabling easy reusability of the data by all modern programming languages that implement JSON parsers. The 'mwtab' package implements its metadata validation functionality based on a pre-defined JSON schema that can be easily specialized for specific types of metabolomics studies. The library also provides a command-line interface for interconversion between 'mwTab' and JSONized formats in raw text and a variety of compressed binary file formats. The 'mwtab' package is an easy-to-use Python package that provides FAIRer utilization of the Metabolomics Workbench Data Repository. The source code is freely available on GitHub and via the Python Package Index. Documentation includes a 'User Guide', 'Tutorial', and 'API Reference'. The GitHub repository also provides 'mwtab' package unit-tests via a continuous integration service.
Rutllant, Josep
2016-01-01
Comparative genomics approaches provide a means of leveraging functional genomics information from a highly annotated model organism's genome (such as the mouse genome) in order to make physiological inferences about the role of genes and proteins in a less characterized organism's genome (such as the Burmese python). We employed a comparative genomics approach to produce the functional annotation of Python bivittatus genes encoding proteins associated with sperm phenotypes. We identify 129 gene-phenotype relationships in the python which are implicated in 10 specific sperm phenotypes. Results obtained through our systematic analysis identified subsets of python genes exhibiting associations with gene ontology annotation terms. Functional annotation data was represented in a semantic scatter plot. Together, these newly annotated Python bivittatus genome resources provide a high resolution framework from which the biology relating to reptile spermatogenesis, fertility, and reproduction can be further investigated. Applications of our research include (1) production of genetic diagnostics for assessing fertility in domestic and wild reptiles; (2) enhanced assisted reproduction technology for endangered and captive reptiles; and (3) novel molecular targets for biotechnology-based approaches aimed at reducing fertility and reproduction of invasive reptiles. Additional enhancements to reptile genomic resources will further enhance their value. PMID:27200191
2016-01-01
Color variation provides the opportunity to investigate the genetic basis of evolution and selection. Reptiles are less studied than mammals. Comparative genomics approaches allow for knowledge gained in one species to be leveraged for use in another species. We describe a comparative vertebrate analysis of conserved regulatory modules in pythons aimed at assessing bioinformatics evidence that transcription factors important in mammalian pigmentation phenotypes may also be important in python pigmentation phenotypes. We identified 23 python orthologs of mammalian genes associated with variation in coat color phenotypes for which we assessed the extent of pairwise protein sequence identity between pythons and mouse, dog, horse, cow, chicken, anole lizard, and garter snake. We next identified a set of melanocyte/pigment associated transcription factors (CREB, FOXD3, LEF-1, MITF, POU3F2, and USF-1) that exhibit relatively conserved sequence similarity within their DNA binding regions across species based on orthologous alignments across multiple species. Finally, we identified 27 evolutionarily conserved clusters of transcription factor binding sites within ~200-nucleotide intervals of the 1500-nucleotide upstream regions of AIM1, DCT, MC1R, MITF, MLANA, OA1, PMEL, RAB27A, and TYR from Python bivittatus. Our results provide insight into pigment phenotypes in pythons. PMID:27698666
Irizarry, Kristopher J L; Bryden, Randall L
2016-01-01
Color variation provides the opportunity to investigate the genetic basis of evolution and selection. Reptiles are less studied than mammals. Comparative genomics approaches allow for knowledge gained in one species to be leveraged for use in another species. We describe a comparative vertebrate analysis of conserved regulatory modules in pythons aimed at assessing bioinformatics evidence that transcription factors important in mammalian pigmentation phenotypes may also be important in python pigmentation phenotypes. We identified 23 python orthologs of mammalian genes associated with variation in coat color phenotypes for which we assessed the extent of pairwise protein sequence identity between pythons and mouse, dog, horse, cow, chicken, anole lizard, and garter snake. We next identified a set of melanocyte/pigment associated transcription factors (CREB, FOXD3, LEF-1, MITF, POU3F2, and USF-1) that exhibit relatively conserved sequence similarity within their DNA binding regions across species based on orthologous alignments across multiple species. Finally, we identified 27 evolutionarily conserved clusters of transcription factor binding sites within ~200-nucleotide intervals of the 1500-nucleotide upstream regions of AIM1, DCT, MC1R, MITF, MLANA, OA1, PMEL, RAB27A, and TYR from Python bivittatus . Our results provide insight into pigment phenotypes in pythons.
Irizarry, Kristopher J L; Rutllant, Josep
2016-01-01
Comparative genomics approaches provide a means of leveraging functional genomics information from a highly annotated model organism's genome (such as the mouse genome) in order to make physiological inferences about the role of genes and proteins in a less characterized organism's genome (such as the Burmese python). We employed a comparative genomics approach to produce the functional annotation of Python bivittatus genes encoding proteins associated with sperm phenotypes. We identify 129 gene-phenotype relationships in the python which are implicated in 10 specific sperm phenotypes. Results obtained through our systematic analysis identified subsets of python genes exhibiting associations with gene ontology annotation terms. Functional annotation data was represented in a semantic scatter plot. Together, these newly annotated Python bivittatus genome resources provide a high resolution framework from which the biology relating to reptile spermatogenesis, fertility, and reproduction can be further investigated. Applications of our research include (1) production of genetic diagnostics for assessing fertility in domestic and wild reptiles; (2) enhanced assisted reproduction technology for endangered and captive reptiles; and (3) novel molecular targets for biotechnology-based approaches aimed at reducing fertility and reproduction of invasive reptiles. Additional enhancements to reptile genomic resources will further enhance their value.
NASA Astrophysics Data System (ADS)
Steinberg, P. D.; Bednar, J. A.; Rudiger, P.; Stevens, J. L. R.; Ball, C. E.; Christensen, S. D.; Pothina, D.
2017-12-01
The rich variety of software libraries available in the Python scientific ecosystem provides a flexible and powerful alternative to traditional integrated GIS (geographic information system) programs. Each such library focuses on doing a certain set of general-purpose tasks well, and Python makes it relatively simple to glue the libraries together to solve a wide range of complex, open-ended problems in Earth science. However, choosing an appropriate set of libraries can be challenging, and it is difficult to predict how much "glue code" will be needed for any particular combination of libraries and tasks. Here we present a set of libraries that have been designed to work well together to build interactive analyses and visualizations of large geographic datasets, in standard web browsers. The resulting workflows run on ordinary laptops even for billions of data points, and easily scale up to larger compute clusters when available. The declarative top-level interface used in these libraries means that even complex, fully interactive applications can be built and deployed as web services using only a few dozen lines of code, making it simple to create and share custom interactive applications even for datasets too large for most traditional GIS systems. The libraries we will cover include GeoViews (HoloViews extended for geographic applications) for declaring visualizable/plottable objects, Bokeh for building visual web applications from GeoViews objects, Datashader for rendering arbitrarily large datasets faithfully as fixed-size images, Param for specifying user-modifiable parameters that model your domain, Xarray for computing with n-dimensional array data, Dask for flexibly dispatching computational tasks across processors, and Numba for compiling array-based Python code down to fast machine code. We will show how to use the resulting workflow with static datasets and with simulators such as GSSHA or AdH, allowing you to deploy flexible, high-performance web-based dashboards for your GIS data or simulations without needing major investments in code development or maintenance.
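As a small, hedged illustration of one piece of this stack, Datashader can rasterize a very large point dataset into a fixed-size image; the data below are random, not a real GIS dataset:

```python
# Hedged sketch of the Datashader step in the stack described above: aggregate
# a large synthetic point dataset into a fixed-size image (data are made up).
import numpy as np
import pandas as pd
import datashader as ds
import datashader.transfer_functions as tf

n = 10_000_000
df = pd.DataFrame({
    "lon": np.random.normal(-95.0, 5.0, n),   # hypothetical point cloud
    "lat": np.random.normal(38.0, 3.0, n),
})

canvas = ds.Canvas(plot_width=800, plot_height=500)
agg = canvas.points(df, "lon", "lat")          # count points per pixel
img = tf.shade(agg, how="log")                 # map counts to colors
img.to_pil().save("points.png")
```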
Creating CAD designs and performing their subsequent analysis using opensource solutions in Python
NASA Astrophysics Data System (ADS)
Iakushkin, Oleg O.; Sedova, Olga S.
2018-01-01
The paper discusses the concept of a system that encapsulates the transition from geometry building to strength tests. The solution we propose views the engineer as a programmer who is capable of coding the procedure for working with the model, i.e., to outline the necessary transformations and create cases for boundary conditions. We propose a prototype of such a system. In our work, we used: the Python programming language to create the program; the Jupyter framework to create a single workspace visualization; the pythonOCC library to implement CAD; the FEniCS library to implement FEM; GMSH and VTK utilities. The prototype is launched on a platform which is a dynamically expandable multi-tenant cloud service providing users with all computing resources on demand. However, the system may be deployed locally for prototyping or work that does not involve resource-intensive computing. To make it possible, we used containerization, isolating the system in a Docker container.
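To illustrate only the FEM step, a minimal sketch using the legacy FEniCS (dolfin) API to solve a Poisson problem on a unit square is shown below; it stands in for the strength tests and is not the authors' actual geometry or loading:

```python
# Hedged sketch of the FEM step only, using the legacy FEniCS (dolfin) API:
# solve a Poisson problem on a unit square with a clamped boundary.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    Function, DirichletBC, Constant, dot, grad, dx, solve)

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)

u, v = TrialFunction(V), TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")   # homogeneous Dirichlet BC
f = Constant(1.0)                                    # unit source term

a = dot(grad(u), grad(v)) * dx   # bilinear form
L = f * v * dx                   # linear form

u_h = Function(V)
solve(a == L, u_h, bc)
print("max value:", u_h.vector().max())
```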
Graham Reynolds, R; Niemiller, Matthew L; Revell, Liam J
2014-02-01
Snakes in the families Boidae and Pythonidae constitute some of the most spectacular reptiles and comprise an enormous diversity of morphology, behavior, and ecology. While many species of boas and pythons are familiar, taxonomy and evolutionary relationships within these families remain contentious and fluid. A major effort in evolutionary and conservation biology is to assemble a comprehensive Tree-of-Life, or a macro-scale phylogenetic hypothesis, for all known life on Earth. No previously published study has produced a species-level molecular phylogeny for more than 61% of boa species or 65% of python species. Using both novel and previously published sequence data, we have produced a species-level phylogeny for 84.5% of boid species and 82.5% of pythonid species, contextualized within a larger phylogeny of henophidian snakes. We obtained new sequence data for three boid, one pythonid, and two tropidophiid taxa which have never previously been included in a molecular study, in addition to generating novel sequences for seven genes across an additional 12 taxa. We compiled an 11-gene dataset for 127 taxa, consisting of the mitochondrial genes CYTB, 12S, and 16S, and the nuclear genes bdnf, bmp2, c-mos, gpr35, rag1, ntf3, odc, and slc30a1, totaling up to 7561 base pairs per taxon. We analyzed this dataset using both maximum likelihood and Bayesian inference and recovered a well-supported phylogeny for these species. We found significant evidence of discordance between taxonomy and evolutionary relationships in the genera Tropidophis, Morelia, Liasis, and Leiopython, and we found support for elevating two previously suggested boid species. We suggest a revised taxonomy for the boas (13 genera, 58 species) and pythons (8 genera, 40 species), review relationships between our study and the many other molecular phylogenetic studies of henophidian snakes, and present a taxonomic database and alignment which may be easily used and built upon by other researchers. Copyright © 2013 Elsevier Inc. All rights reserved.
Tactical EO/IE System for Ground Forces
1990-09-01
this thesis is based on these ideas. We will give the guidelines to a project manager or to a beginner in EW research about EO/IR system acquisition... [snippet continues with a fragmented table of air-to-air engagements (Syria vs. Israel, 1982; Korea vs. USSR, 1983) listing Sparrow, AIM-9G/L and PYTHON missiles]
NASA Astrophysics Data System (ADS)
Laban, Shaban; El-Desouky, Aly
2014-05-01
To achieve rapid, simple and reliable parallel processing of different types of tasks and big data processing on any compute cluster, a lightweight messaging-based distributed applications processing and workflow execution framework model is proposed. The framework is based on Apache ActiveMQ and the Simple (or Streaming) Text Oriented Message Protocol (STOMP). ActiveMQ, a popular and powerful open source persistent messaging and integration patterns server with scheduler capabilities, acts as a message broker in the framework. STOMP provides an interoperable wire format that allows framework programs to talk and interact with each other and with ActiveMQ easily. In order to efficiently use the message broker, a unified message and topic naming pattern is utilized to achieve the required operation. Only three Python programs and a simple library, used to unify and simplify the use of ActiveMQ and the STOMP protocol, are needed to use the framework. A watchdog program is used to monitor, remove, add, start and stop any machine and/or its different tasks when necessary. For every machine, exactly one dedicated zoo keeper program is used to start the different functions or tasks (stompShell programs) needed for executing the user-required workflow. The stompShell instances are used to execute any workflow jobs based on received messages. A well-defined, simple and flexible message structure, based on JavaScript Object Notation (JSON), is used to build any complex workflow systems. Also, JSON format is used in configuration and communication between machines and programs. The framework is platform independent. Although the framework is built using Python, the actual workflow programs or jobs can be implemented by any programming language. The generic framework can be used in small national data centres for processing seismological and radionuclide data received from the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). Also, it is possible to extend the use of the framework in monitoring the IDC pipeline. The detailed design, implementation, conclusion and future work of the proposed framework will be presented.
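A hedged sketch of the messaging pattern described above, using the stomp.py client to send a JSON-encoded job to an ActiveMQ queue; this is not the authors' stompShell code, and the broker address, credentials, queue name and message fields are illustrative:

```python
# Hedged sketch of the messaging pattern: publish a JSON job message to an
# ActiveMQ queue over STOMP using the stomp.py client (all names illustrative).
import json
import stomp

conn = stomp.Connection([("localhost", 61613)])   # default ActiveMQ STOMP port
conn.connect("admin", "admin", wait=True)

job = {
    "machine": "node01",
    "task": "process_waveforms",
    "args": {"station": "ABC", "day": "2014-05-01"},
}
conn.send(destination="/queue/node01.jobs", body=json.dumps(job))
conn.disconnect()
```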
ExoData: A Python package to handle large exoplanet catalogue data
NASA Astrophysics Data System (ADS)
Varley, Ryan
2016-10-01
Exoplanet science often involves using the system parameters of real exoplanets for tasks such as simulations, fitting routines, and target selection for proposals. Several exoplanet catalogues are already well established but often lack a version history and code-friendly interfaces. Software that bridges the barrier between the catalogues and code enables users to improve the specific repeatability of results by facilitating the retrieval of exact system parameters used in published results, along with unifying the equations and software used. As exoplanet science moves towards large data, gone are the days where researchers can recall the current population from memory. An interface able to query the population now becomes invaluable for target selection and population analysis. ExoData is a Python interface and exploratory analysis tool for the Open Exoplanet Catalogue. It allows the loading of exoplanet systems into Python as objects (Planet, Star, Binary, etc.) from which common orbital and system equations can be calculated and measured parameters retrieved. This allows researchers to use tested code of the common equations they require (with units) and provides a large science input catalogue of planets for easy plotting and use in research. Advanced querying of targets is possible using the database and Python programming language. ExoData is also able to parse spectral types and fill in missing parameters according to programmable specifications and equations. Examples of use cases are integration of equations into data reduction pipelines, selecting planets for observing proposals and as an input catalogue to large scale simulation and analysis of planets. ExoData is a Python package available freely on GitHub.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, D.; Dubrovin, M.; Gaponenko, I.
Psana (Photon Science Analysis) is a software package that is used to analyze data produced by the Linac Coherent Light Source X-ray free-electron laser at the SLAC National Accelerator Laboratory. The project began in 2011, is written primarily in C++ with some Python, and provides user interfaces in both C++ and Python. Most users use the Python interface. The same code can be run in real time while data are being taken as well as offline, executing on many nodes/cores using MPI for parallelization. It is publicly available and installable on the RHEL5/6/7 operating systems.
A Python Analytical Pipeline to Identify Prohormone Precursors and Predict Prohormone Cleavage Sites
Southey, Bruce R.; Sweedler, Jonathan V.; Rodriguez-Zas, Sandra L.
2008-01-01
Neuropeptides and hormones are signaling molecules that support cell–cell communication in the central nervous system. Experimentally characterizing neuropeptides requires significant efforts because of the complex and variable processing of prohormone precursor proteins into neuropeptides and hormones. We demonstrate the power and flexibility of the Python language to develop components of a bioinformatic analytical pipeline to identify precursors from genomic data and to predict cleavage as these precursors are en route to the final bioactive peptides. We identified 75 precursors in the rhesus genome, predicted cleavage sites using support vector machines and compared the rhesus predictions to putative assignments based on homology to human sequences. The correct classification rate of cleavage using the support vector machines was over 97% for both human and rhesus data sets. The functionality of Python has been important to develop and maintain NeuroPred (http://neuroproteomics.scs.uiuc.edu/neuropred.html), a user-centered web application for the neuroscience community that provides cleavage site prediction from a wide range of models, precision and accuracy statistics, post-translational modifications, and the molecular mass of potential peptides. The combined results illustrate the suitability of the Python language to implement an all-inclusive bioinformatics approach to predict neuropeptides that encompasses a large number of interdependent steps, from scanning genomes for precursor genes to identification of potential bioactive neuropeptides. PMID:19169350
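A generic scikit-learn sketch of the support-vector-machine step (not NeuroPred's own code); the residue windows and labels below are synthetic placeholders:

```python
# Generic sketch of the SVM step: classify fixed-length residue windows around
# candidate cleavage sites (windows and labels here are synthetic).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

windows = ["GKRSAKR", "LLKRDAP", "AAKKAGL", "PRKRSSG"] * 25   # toy 7-mer windows
labels = np.array([1, 0, 0, 1] * 25)                           # 1 = cleaved


def encode(window):
    # One-hot encode a residue window into a flat numeric vector.
    vec = np.zeros(len(window) * len(AMINO_ACIDS))
    for i, aa in enumerate(window):
        vec[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return vec


X = np.array([encode(w) for w in windows])
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, X, labels, cv=5).mean())
```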
Ibmdbpy-spatial : An Open-source implementation of in-database geospatial analytics in Python
NASA Astrophysics Data System (ADS)
Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor
2017-04-01
As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need for adoption of in-database analytic technology within geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis for making better predictions about risks and opportunities, identifying trends, and spotting anomalies. Although there are a number of open-source spatial analysis libraries like geopandas and shapely available today, most of them have been restricted to manipulation and analysis of geometric objects with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, to fill the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform called Bluemix. Working in-database reduces the network overload, as the complete data need not be replicated into the user's local system altogether and only a subset of the entire dataset can be fetched into memory in a single instance. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features, such as columnar storage and parallel processing. The package is currently supported on Python versions from 2.7 up to 3.4. The basic architecture of the package consists of three main components - 1) a connection to the dashDB represented by the instance IdaDataBase, which uses a middleware API, namely pypyodbc or jaydebeapi, to establish the database connection via ODBC or JDBC respectively, 2) an instance to represent the spatial data stored in the database as a dataframe in Python, called the IdaGeoDataFrame, with a specific geometry attribute which recognises a planar geometry column in dashDB and 3) Python wrappers for spatial functions like within, distance, area, buffer, and more, which dashDB currently supports to make the querying process from Python much simpler for the users. The spatial functions translate well-known geopandas-like syntax into SQL queries utilising the database connection to perform spatial operations in-database and can operate on single geometries as well as on two geometries from different IdaGeoDataFrames. The in-database queries strictly follow the standards of OpenGIS Implementation Specification for Geographic information - Simple feature access for SQL. The results of the operations obtained can thereby be accessed dynamically via interactive Jupyter notebooks from any system which supports Python, without any additional dependencies and can also be combined with other open source libraries such as matplotlib and folium in-built within Jupyter notebooks for visualization purposes. We built a use case to analyse crime hotspots in New York city to validate our implementation and visualized the results as a choropleth map for each borough.
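A hedged sketch of the usage pattern described above; the constructor and method signatures below are assumptions based on this description and are not verified against the ibmdbpy-spatial API:

```python
# Hedged sketch based on the description above; constructor and method
# signatures are assumptions, not verified against the ibmdbpy-spatial API.
from ibmdbpy import IdaDataBase, IdaGeoDataFrame

# Connect to dashDB via ODBC (DSN name and credentials are placeholders).
idadb = IdaDataBase(dsn="DASHDB", uid="user", pwd="password")

# Wrap spatial tables already stored in dashDB and declare the geometry column
# (table names and the 'geometry' keyword are hypothetical).
crimes = IdaGeoDataFrame(idadb, "SAMPLES.NYC_CRIMES", geometry="SHAPE")
boroughs = IdaGeoDataFrame(idadb, "SAMPLES.NYC_BOROUGHS", geometry="SHAPE")

# Geospatial operations are pushed down to the database as SQL.
buffered = crimes.buffer(distance=100, unit="METER")
borough_areas = boroughs.area(unit="KILOMETER")
print(borough_areas.head())

idadb.close()
```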
GPU-powered model analysis with PySB/cupSODA.
Harris, Leonard A; Nobile, Marco S; Pino, James C; Lubbock, Alexander L R; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo; Lopez, Carlos F
2017-11-01
A major barrier to the practical utilization of large, complex models of biochemical systems is the lack of open-source computational tools to evaluate model behaviors over high-dimensional parameter spaces. This is due to the high computational expense of performing thousands to millions of model simulations required for statistical analysis. To address this need, we have implemented a user-friendly interface between cupSODA, a GPU-powered kinetic simulator, and PySB, a Python-based modeling and simulation framework. For three example models of varying size, we show that for large numbers of simulations PySB/cupSODA achieves order-of-magnitude speedups relative to a CPU-based ordinary differential equation integrator. The PySB/cupSODA interface has been integrated into the PySB modeling framework (version 1.4.0), which can be installed from the Python Package Index (PyPI) using a Python package manager such as pip. cupSODA source code and precompiled binaries (Linux, Mac OS/X, Windows) are available at github.com/aresio/cupSODA (requires an Nvidia GPU; developer.nvidia.com/cuda-gpus). Additional information about PySB is available at pysb.org. paolo.cazzaniga@unibg.it or c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
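A minimal sketch of a PySB model run with the CPU-based ScipyOdeSimulator; per the abstract, the GPU path swaps in pysb.simulator.CupSodaSimulator and requires an Nvidia GPU with the cupSODA binaries:

```python
# Hedged sketch: a tiny PySB model integrated with the CPU-based
# ScipyOdeSimulator; the GPU path would use pysb.simulator.CupSodaSimulator.
import numpy as np
from pysb import Model, Monomer, Parameter, Initial, Rule, Observable
from pysb.simulator import ScipyOdeSimulator

Model()                                   # PySB exports 'model' and the names below
Monomer("A")
Parameter("A_0", 100.0)
Parameter("k_deg", 0.1)
Initial(A(), A_0)
Rule("degrade_A", A() >> None, k_deg)     # first-order degradation
Observable("A_total", A())

tspan = np.linspace(0, 50, 101)
sim = ScipyOdeSimulator(model, tspan=tspan)
result = sim.run()
print(result.observables["A_total"][-1])  # amount remaining at t = 50
```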
NASA Astrophysics Data System (ADS)
Hughes, J. D.; Metz, P. A.
2014-12-01
Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
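A hedged sketch of the Latin Hypercube Monte Carlo idea with pyDOE and scipy.stats; the distributions and monthly values below are made up and are not the Tsala Apopka data:

```python
# Hedged sketch of the Latin Hypercube Monte Carlo water-budget idea;
# distributions and values are illustrative only.
import numpy as np
from scipy import stats
from pyDOE import lhs

n_samples = 10000
unit = lhs(4, samples=n_samples)           # uniform [0,1] LHS design, 4 variables

# Transform each column to an assumed distribution (all in mm/month).
rain   = stats.norm(loc=120.0, scale=10.0).ppf(unit[:, 0])      # rainfall
et     = stats.uniform(loc=80.0, scale=40.0).ppf(unit[:, 1])    # ET, 80-120
canal  = stats.norm(loc=15.0, scale=3.0).ppf(unit[:, 2])        # canal outflow
dstore = stats.norm(loc=5.0, scale=2.0).ppf(unit[:, 3])         # storage change

# Residual water budget: the SW/GW exchange closes the balance for each sample.
swgw = rain - et - canal - dstore
print("median exchange: %.1f mm" % np.median(swgw))
print("95%% interval: %.1f to %.1f mm" % tuple(np.percentile(swgw, [2.5, 97.5])))
```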
pyCTQW: A continuous-time quantum walk simulator on distributed memory computers
NASA Astrophysics Data System (ADS)
Izaac, Josh A.; Wang, Jingbo B.
2015-01-01
In the general field of quantum information and computation, quantum walks are playing an increasingly important role in constructing physical models and quantum algorithms. We have recently developed a distributed memory software package pyCTQW, with an object-oriented Python interface, that allows efficient simulation of large multi-particle CTQW (continuous-time quantum walk)-based systems. In this paper, we present an introduction to the Python and Fortran interfaces of pyCTQW, discuss various numerical methods of calculating the matrix exponential, and demonstrate the performance behavior of pyCTQW on a distributed memory cluster. In particular, the Chebyshev and Krylov-subspace methods for calculating the quantum walk propagation are provided, as well as methods for visualization and data analysis.
graphkernels: R and Python packages for graph comparison
Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-01-01
Abstract Summary Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch Supplementary information Supplementary data are available online at Bioinformatics. PMID:29028902
graphkernels: R and Python packages for graph comparison.
Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-02-01
Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
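A hedged usage sketch: the graphkernels call below follows its documentation as recalled (CalculateWLKernel on a list of igraph graphs) and should be treated as an assumption; the precomputed-kernel SVM step is standard scikit-learn:

```python
# Hedged sketch: compute a Weisfeiler-Lehman kernel matrix with graphkernels
# (call treated as an assumption) and classify with a precomputed-kernel SVM.
import igraph as ig
import numpy as np
from sklearn.svm import SVC
import graphkernels.kernels as gk

# Toy dataset: rings versus trees, with vertex labels set to degrees.
graphs, y = [], []
for i in range(20):
    g = ig.Graph.Ring(6 + i % 3) if i % 2 == 0 else ig.Graph.Tree(7 + i % 3, 2)
    g.vs["label"] = g.degree()
    graphs.append(g)
    y.append(i % 2)

K = gk.CalculateWLKernel(graphs, par=3)   # WL kernel, 3 iterations (assumed name)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))
```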
NASA Technical Reports Server (NTRS)
Meyn, Larry A.
2018-01-01
One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use
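A generic OpenMDAO sketch (not RCOTOOLS itself): a simple analytic component stands in for the file-based rotorcraft codes that the RCOTOOLS wrappers expose to the optimizer:

```python
# Generic OpenMDAO sketch (not RCOTOOLS): minimize a simple analytic function
# standing in for the file-based rotorcraft codes that RCOTOOLS wraps.
import openmdao.api as om

prob = om.Problem()
prob.model.add_subsystem(
    "sizing", om.ExecComp("mass = 50.0 + (radius - 6.0)**2"), promotes=["*"])

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options["optimizer"] = "SLSQP"
prob.model.add_design_var("radius", lower=3.0, upper=12.0)
prob.model.add_objective("mass")

prob.setup()
prob.set_val("radius", 10.0)    # starting point
prob.run_driver()
print(prob.get_val("radius"), prob.get_val("mass"))   # optimum near radius = 6
```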
Computational Workbench for Multibody Dynamics
NASA Technical Reports Server (NTRS)
Edmonds, Karina
2007-01-01
PyCraft is a computer program that provides an interactive, workbench-like computing environment for developing and testing algorithms for multibody dynamics. Examples of multibody dynamic systems amenable to analysis with the help of PyCraft include land vehicles, spacecraft, robots, and molecular models. PyCraft is based on the Spatial-Operator-Algebra (SOA) formulation for multibody dynamics. The SOA operators enable construction of simple and compact representations of complex multibody dynamical equations. Within the PyCraft computational workbench, users can, essentially, use the high-level SOA operator notation to represent the variety of dynamical quantities and algorithms and to perform computations interactively. PyCraft provides a Python-language interface to underlying C++ code. Working with SOA concepts, a user can create and manipulate Python-level operator classes in order to implement and evaluate new dynamical quantities and algorithms. During use of PyCraft, virtually all SOA-based algorithms are available for computational experiments.
Investigating interoperability of the LSST data management software stack with Astropy
NASA Astrophysics Data System (ADS)
Jenness, Tim; Bosch, James; Owen, Russell; Parejko, John; Sick, Jonathan; Swinbank, John; de Val-Borro, Miguel; Dubois-Felsmann, Gregory; Lim, K.-T.; Lupton, Robert H.; Schellart, Pim; Krughoff, K. S.; Tollerud, Erik J.
2016-07-01
The Large Synoptic Survey Telescope (LSST) will be an 8.4m optical survey telescope sited in Chile and capable of imaging the entire sky twice a week. The data rate of approximately 15 TB per night and the requirements to both issue alerts on transient sources within 60 seconds of observing and create annual data releases mean that automated data management systems and data processing pipelines are a key deliverable of the LSST construction project. The LSST data management software has been in development since 2004 and is based on a C++ core with a Python control layer. The software consists of nearly a quarter of a million lines of code covering the system from fundamental WCS and table libraries to pipeline environments and distributed process execution. The Astropy project began in 2011 as an attempt to bring together disparate open source Python projects and build a core standard infrastructure that can be used and built upon by the astronomy community. This project has been phenomenally successful in the years since it began and has grown to be the de facto standard for Python software in astronomy. Astropy brings with it considerable expectations from the community on how astronomy Python software should be developed and it is clear that by the time LSST is fully operational in the 2020s many of the prospective users of the LSST software stack will expect it to be fully interoperable with Astropy. In this paper we describe the overlap between the LSST science pipeline software and Astropy software and investigate areas where the LSST software provides new functionality. We also discuss the possibilities of re-engineering the LSST science pipeline software to build upon Astropy, including the option of contributing affiliated packages.
PYCHEM: a multivariate analysis package for python.
Jarvis, Roger M; Broadhurst, David; Johnson, Helen; O'Boyle, Noel M; Goodacre, Royston
2006-10-15
We have implemented a multivariate statistical analysis toolbox, with an optional standalone graphical user interface (GUI), using the Python scripting language. This is a free and open source project that addresses the need for a multivariate analysis toolbox in Python. Although the functionality provided does not cover the full range of multivariate tools that are available, it has a broad complement of methods that are widely used in the biological sciences. In contrast to tools like MATLAB, PyChem 2.0.0 is easily accessible and free, allows for rapid extension using a range of Python modules and is part of the growing amount of complementary and interoperable scientific software in Python based upon SciPy. One of the attractions of PyChem is that it is an open source project and so there is an opportunity, through collaboration, to increase the scope of the software and to continually evolve a user-friendly platform that has applicability across a wide range of analytical and post-genomic disciplines. http://sourceforge.net/projects/pychem
Bayesian Models for Astrophysical Data Using R, JAGS, Python, and Stan
NASA Astrophysics Data System (ADS)
Hilbe, Joseph M.; de Souza, Rafael S.; Ishida, Emille E. O.
2017-05-01
This comprehensive guide to Bayesian methods in astronomy enables hands-on work by supplying complete R, JAGS, Python, and Stan code, to use directly or to adapt. It begins by examining the normal model from both frequentist and Bayesian perspectives and then progresses to a full range of Bayesian generalized linear and mixed or hierarchical models, as well as additional types of models such as ABC and INLA. The book provides code that is largely unavailable elsewhere and includes details on interpreting and evaluating Bayesian models. Initial discussions offer models in synthetic form so that readers can easily adapt them to their own data; later the models are applied to real astronomical data. The consistent focus is on hands-on modeling, analysis of data, and interpretations that address scientific questions. A must-have for astronomers, its concrete approach will also be attractive to researchers in the sciences more generally.
Ecological correlates of invasion impact for Burmese pythons in Florida
Reed, R.N.; Willson, J.D.; Rodda, G.H.; Dorcas, M.E.
2012-01-01
An invasive population of Burmese pythons (Python molurus bivittatus) is established across several thousand square kilometers of southern Florida and appears to have caused precipitous population declines among several species of native mammals. Why has this giant snake had such great success as an invasive species when many established reptiles have failed to spread? We scored the Burmese python for each of 15 literature-based attributes relative to predefined comparison groups from a diverse range of taxa and provide a review of the natural history and ecology of Burmese pythons relevant to each attribute. We focused on attributes linked to spread and magnitude of impacts rather than establishment success. Our results suggest that attributes related to body size and generalism appeared to be particularly applicable to the Burmese python's success in Florida. The attributes with the highest scores were: high reproductive potential, low vulnerability to predation, large adult body size, large offspring size and high dietary breadth. However, attributes of ectotherms in general and pythons in particular (including predatory mode, energetic efficiency and social interactions) might have also contributed to invasion success. Although establishment risk assessments are an important initial step in prevention of new establishments, evaluating species in terms of their potential for spreading widely and negatively impacting ecosystems might become part of the means by which resource managers prioritize control efforts in environments with large numbers of introduced species.
Safdari, Reza; Maserat, Elham; Asadzadeh Aghdaei, Hamid; Javan Amoli, Amir Hossein; Mohaghegh Shalmani, Hamid
2017-01-01
The aim was to assess person-centered survival rates in a population-based screening program using an intelligent clinical decision support system. Colorectal cancer is a common malignancy and a major cause of morbidity and mortality throughout the world, and it is the sixth leading cause of cancer death in Iran. In this study, we used cosine similarity as a data mining technique within an intelligent system to estimate the survival of at-risk groups in the screening plan. In the first step, we determined a minimum data set (MDS), which was approved by experts and a review of the literature. In the second step, the MDS was coded in the Python language and matched with the cosine similarity formula. Finally, the survival rate (in percent) was displayed in the user interface of the national intelligent system. The national intelligent system was developed in the PyCharm environment. Its main data elements consist of demographic information, age, referral type, risk group, recommendation and survival rate. The minimum data set related to survival comprises clinical status, past medical history and socio-demographic information. Data on the covered population, held in a comprehensive database, were connected to the intelligent system and a survival rate was estimated for each patient. The mean survival of HNPCC patients and FAP patients was 77.7% and 75.1%, respectively. The mean survival rate and other calculations are updated in real time as new patients enter the CRC registry. The national intelligent system monitors the entire risk group, reports survival rates using electronic guidelines and data mining, and operates according to the clinical process. This web-based software has a critical role in estimating survival rates for health care planning.
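A minimal sketch of the cosine-similarity idea is given below: a new patient's coded minimum data set is compared against registry patients, and survival is estimated from the most similar cases. The feature coding and the similarity-weighted average are hypothetical and only illustrate the general technique, not the system's actual algorithm.

    import numpy as np

    def cosine_similarity(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Registry patients already coded as numeric MDS vectors, plus observed survival (%).
    coded_mds = np.array([[1, 0, 1],
                          [0, 1, 1],
                          [1, 1, 0]], dtype=float)
    survival = np.array([62.0, 55.0, 71.0])

    # New patient coded with the same minimum data set.
    new_patient = np.array([1, 0, 1], dtype=float)
    sims = np.array([cosine_similarity(new_patient, row) for row in coded_mds])

    # Similarity-weighted estimate of the new patient's survival rate.
    estimate = np.sum(sims * survival) / np.sum(sims)
    print(round(estimate, 1))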
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL also includes two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes 1-2 orders of magnitude smaller (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
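Of the three metric families mentioned above, the derivative-based elementary effects (Morris approach) are the easiest to illustrate in a few lines; the sketch below computes the mean absolute elementary effect of each parameter for a toy response surface and is a generic illustration, not VARS-TOOL code.

    import numpy as np

    def model(x):
        # Hypothetical response surface with parameter 0 dominant.
        return 3.0 * x[0] + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[2]

    rng = np.random.default_rng(0)
    n_params, n_base_points, delta = 3, 50, 0.1

    ee = np.zeros((n_base_points, n_params))
    for t in range(n_base_points):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        f0 = model(x)
        for i in range(n_params):
            x_pert = x.copy()
            x_pert[i] += delta
            ee[t, i] = (model(x_pert) - f0) / delta

    # mu* (mean absolute elementary effect) ranks parameter influence.
    print(np.abs(ee).mean(axis=0))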
modlAMP: Python for antimicrobial peptides.
Müller, Alex T; Gabernet, Gisela; Hiss, Jan A; Schneider, Gisbert
2017-09-01
We have implemented the molecular design laboratory's antimicrobial peptides package (modlAMP), a Python-based software package for the design, classification and visual representation of peptide data. modlAMP offers functions for molecular descriptor calculation and the retrieval of amino acid sequences from public or local sequence databases, and provides instant access to precompiled datasets for machine learning. The package also contains methods for the analysis and representation of circular dichroism spectra. The modlAMP Python package is available under the BSD license from URL http://doi.org/10.5905/ethz-1007-72 or via pip from the Python Package Index (PyPI). gisbert.schneider@pharma.ethz.ch. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
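As a rough illustration, descriptor calculation in modlAMP typically follows the pattern below; the class and method names (GlobalDescriptor, calculate_all) are assumptions based on the package's documented interface and may differ between versions.

    # Hedged sketch of global descriptor calculation for two peptide sequences.
    from modlamp.descriptors import GlobalDescriptor

    sequences = ['GLFDIVKKVVGALGSL', 'KLLKLLKKLLKLLK']
    desc = GlobalDescriptor(sequences)
    desc.calculate_all()          # length, charge, hydrophobicity, etc. (assumed method)
    print(desc.descriptor)        # numpy array of descriptor values (assumed attribute)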
Joyce, William; Axelsson, Michael; Altimiras, Jordi; Wang, Tobias
2016-07-15
The ventricles of non-crocodilian reptiles are incompletely divided and provide an opportunity for mixing of oxygen-poor blood and oxygen-rich blood (intracardiac shunting). However, both cardiac morphology and in vivo shunting patterns exhibit considerable interspecific variation within reptiles. In the present study, we develop an in situ double-perfused heart approach to characterise the propensity and capacity for shunting in five reptile species: the turtle Trachemys scripta, the rock python Python sebae, the yellow anaconda Eunectes notaeus, the varanid lizard Varanus exanthematicus and the bearded dragon Pogona vitticeps. To simulate changes in vascular bed resistance, pulmonary and systemic afterloads were independently manipulated and changes in blood flow distribution amongst the central outflow tracts were monitored. As previously demonstrated in Burmese pythons, rock pythons and varanid lizards exhibited pronounced intraventricular flow separation. As pulmonary or systemic afterload was raised, flow in the respective circulation decreased. However, flow in the other circulation, where afterload was constant, remained stable. This correlates with the convergent evolution of intraventricular pressure separation and the large intraventricular muscular ridge, which compartmentalises the ventricle, in these species. Conversely, in the three other species, the pulmonary and systemic flows were strongly mutually dependent, such that the decrease in pulmonary flow in response to elevated pulmonary afterload resulted in redistribution of perfusate to the systemic circuit (and vice versa). Thus, in these species, the muscular ridge appeared labile and blood could readily traverse the intraventricular cava. We conclude that relatively minor structural differences between non-crocodilian reptiles result in fundamental changes in cardiac function. Further, our study emphasises that functionally similar intracardiac flow separation evolved independently in lizards (varanids) and snakes (pythons) from an ancestor endowed with the capacity for large intracardiac shunts. © 2016. Published by The Company of Biologists Ltd.
Fienen, Michael N.; Kunicki, Thomas C.; Kester, Daniel E.
2011-01-01
This report documents cloudPEST, a Python module with functions to facilitate deployment of the model-independent parameter estimation code PEST on a cloud-computing environment. cloudPEST makes use of low-level, freely available command-line tools that interface with the Amazon Elastic Compute Cloud (EC2™) and that are unlikely to change dramatically. This report describes the preliminary setup for both Python and EC2 tools and subsequently describes the functions themselves. The code and guidelines have been tested primarily on the Windows® operating system but are extensible to Linux®.
Cao, Huojun; Amendt, Brad A
2016-11-01
Developmental dental anomalies are common forms of congenital defects. The molecular mechanisms of dental anomalies are poorly understood. Systematic approaches such as clustering genes based on similar expression patterns could identify novel genes involved in dental anomalies and provide a framework for understanding molecular regulatory mechanisms of these genes during tooth development (odontogenesis). A Python package (pySAPC) implementing a sparse affinity propagation clustering algorithm for large datasets was developed. Genome-wide pairwise similarity was calculated from expression patterns across 45 microarrays spanning several stages of odontogenesis. pySAPC identified 743 gene clusters based on expression pattern similarity during mouse tooth development. Three clusters are significantly enriched for genes associated with dental anomalies (with FDR <0.1). The three clusters of genes have distinct expression patterns during odontogenesis. Clustering genes based on similar expression profiles recovered several known regulatory relationships for genes involved in odontogenesis, as well as many novel genes that may be involved with the same genetic pathways as genes that have already been shown to contribute to dental defects. By using a sparse similarity matrix, pySAPC uses much less memory and CPU time compared with the original affinity propagation program that uses a full similarity matrix. This Python package will be useful for many applications where dataset(s) are too large to use a full similarity matrix. This article is part of a Special Issue entitled "System Genetics" Guest Editor: Dr. Yudong Cai and Dr. Tao Huang. Copyright © 2016. Published by Elsevier B.V.
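The clustering step can be approximated with scikit-learn's affinity propagation on a precomputed similarity matrix, as sketched below; pySAPC differs in accepting a sparse similarity matrix to reduce memory, so this is only an illustration of the underlying idea on toy data.

    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(1)
    expression = rng.normal(size=(100, 45))   # 100 genes x 45 arrays (toy data)

    # Pearson correlation between gene expression profiles as the (dense) similarity.
    similarity = np.corrcoef(expression)

    ap = AffinityPropagation(affinity='precomputed', random_state=0)
    labels = ap.fit_predict(similarity)
    print(len(set(labels)), 'clusters')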
SymPy: Symbolic computing in python
Meurer, Aaron; Smith, Christopher P.; Paprocki, Mateusz; ...
2017-01-02
Here, SymPy is a full-featured computer algebra system (CAS) written in the Python programming language. It is open source, licensed under the extremely permissive 3-clause BSD license. SymPy was started by Ondrej Certik in 2005 and has since grown into a large open-source project with over 500 contributors.
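A minimal example of the symbolic manipulation SymPy provides (standard public API):

    import sympy as sp

    x = sp.symbols('x')
    expr = sp.sin(x) * sp.exp(x)

    print(sp.diff(expr, x))        # exp(x)*sin(x) + exp(x)*cos(x)
    print(sp.integrate(expr, x))   # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
    print(sp.solve(x**2 - 2, x))   # [-sqrt(2), sqrt(2)]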
Drewes, Rich; Zou, Quan; Goodman, Philip H
2009-01-01
Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.
van Soldt, Benjamin J; Danielsen, Carl Christian; Wang, Tobias
2015-12-01
Pythons are unique amongst snakes in having different pressures in the aortas and pulmonary arteries because of intraventricular pressure separation. In this study, we investigate whether this correlates with different blood vessel strength in the ball python Python regius. We excised segments from the left, right, and dorsal aortas, and from the two pulmonary arteries. These were subjected to tensile testing. We show that the aortic vessel wall is significantly stronger than the pulmonary artery wall in P. regius. Gross morphological characteristics (vessel wall thickness and correlated absolute amount of collagen content) are likely the most influential factors. Collagen fiber thickness and orientation are likely to have an effect, though the effect of collagen fiber type and cross-links between fibers will need further study. © 2015 Wiley Periodicals, Inc.
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it into VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation, and results of the created tool.
NASA Astrophysics Data System (ADS)
Anderson, R. B.; Finch, N.; Clegg, S.; Graff, T.; Morris, R. V.; Laura, J.
2017-06-01
We present a Python-based library and graphical interface for the analysis of point spectra. The tool is being developed with a focus on methods used for ChemCam data, but is flexible enough to handle spectra from other instruments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2007-06-18
UEDGE is an interactive suite of physics packages using the Python or BASIS scripting systems. The plasma is described by time-dependent 2D plasma fluid equations that include equations for density, velocity, ion temperature, electron temperature, electrostatic potential, and gas density in the edge region of a magnetic fusion energy confinement device. Slab, cylindrical, and toroidal geometries are allowed, and closed and open magnetic field-line regions are included. Classical transport is assumed along magnetic field lines, and anomalous transport is assumed across field lines. Multi-charge state impurities can be included with the corresponding line-radiation energy loss. Although UEDGE is written in Fortran, for efficient execution and analysis of results, it utilizes either Python or BASIS scripting shells. Python is easily available for many platforms (http://www.Python.org/). The features and availability of BASIS are described in "Basis Manual Set" by P.F. Dubois, Z.C. Motteler, et al., Lawrence Livermore National Laboratory report UCRL-MA-118541, June, 2002 and http://basis.llnl.gov. BASIS has been reviewed and released by LLNL for unlimited distribution. The Python version utilizes PYBASIS scripts developed by D.P. Grote, LLNL. The Python version also uses MPPL code and MAC Perl script, available from the public-domain BASIS source above. The Forthon version of UEDGE uses the same source files, but utilizes Forthon to produce a Python-compatible source. Forthon has been developed by D.P. Grote at LBL (see http://hifweb.lbl.gov/Forthon/ and Grote et al. in the references below), and it is freely available. The graphics can be performed by any package importable to Python, such as PYGIST.
Prevalence of Amblyomma gervaisi ticks on captive snakes in Tamil Nadu.
Catherine, B R; Jayathangaraj, M G; Soundararajan, C; Bala Guru, C; Yogaraj, D
2017-12-01
Ticks are important ectoparasites that occur on snakes and transmit rickettsiosis, anaplasmosis and ehrlichiosis. A total of 62 snakes (Reticulated python, Indian Rock Python, Rat snakes and Spectacled cobra) were examined for tick infestation at Chennai Snake Park Trust (Guindy), Arignar Anna Zoological Park (Vandalur) and the Rescue centre (Velachery) in Tamil Nadu from September, 2015 to June, 2016. Ticks from infested snakes were collected and identified as Amblyomma gervaisi (previously known as Aponomma gervaisi). The overall occurrence of tick infestation on snakes was 66.13%. The highest prevalence of tick infestation was observed on the Reticulated Python (Python reticulatus, 90.91%), followed by the Indian Rock Python (Python molurus, 88.89%), Spectacled cobra (Naja naja, 33.33%) and Rat snake (Ptyas mucosa, 21.05%). The highest prevalence of ticks was observed on snakes reared at Chennai Snake Park Trust, Guindy (83.33%), followed by Arignar Anna Zoological Park, Vandalur (60.00%), with a low prevalence of 37.50% on snakes at the Rescue centre, Velachery. Regarding the system of management, the prevalence of ticks was higher on captive snakes (70.37%) than on free-ranging snakes (37.5%). The presence of ticks was highest in the first quarter compared to the other three quarters and was highly significant (P ≤ 0.01).
Tongue worm (Pentastomida) infection in ball pythons (Python regius) – a case report
Gałęcki, Remigiusz; Sokół, Rajmund; Dudek, Agnieszka
Tongue worms (Pentastomida) are endoparasites causing pentastomiasis, an invasive disease representing a threat to exotic animals and humans. Animals acquire the infection via the alimentary tract. In reptiles, the parasite is present in the lungs, resulting in symptoms from the respiratory system. Pentastomiasis may be asymptomatic, but nonspecific symptoms may occur at high parasite burdens. Due to the harmful effects of many antiparasitic substances, tongue worm invasion in reptiles remains not fully treatable. Although pentastomiasis is rarely diagnosed in Poland, pentastomids were diagnosed in two ball pythons, which were patients at the "Poliklinika Weterynaryjna" veterinary clinic. They showed problems with the respiratory system and a significant deterioration of health. Fenbendazole at a dose of 100 mg/kg b.w., repeated after 7 days, was shown to be effective.
Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines
2017-06-24
Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data, increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service, that automates integration of high throughput data into SBML models. Subsequent steady state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no , whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl . Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws . We envision that SBMLmod will make automated model modification and simulation available to a broader research community.
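Because the service publishes a WSDL definition, any generic SOAP client can introspect and call it; the sketch below uses the zeep library only to create a client and dump the advertised operations, since the individual operation names are not given in the abstract.

    from zeep import Client

    # Connect a generic SOAP client to the SBMLmod web service via its WSDL file.
    client = Client('http://sbmlmod.uit.no/SBMLmod.wsdl')

    # Print the services, ports and operations advertised by the WSDL.
    client.wsdl.dump()

    # Individual operations would then be invoked as client.service.<OperationName>(...).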
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
Schwartz, Yannick; Barbot, Alexis; Thyreau, Benjamin; Frouin, Vincent; Varoquaux, Gaël; Siram, Aditya; Marcus, Daniel S; Poline, Jean-Baptiste
2012-01-01
As neuroimaging databases grow in size and complexity, the time researchers spend investigating and managing the data increases to the expense of data analysis. As a result, investigators rely more and more heavily on scripting using high-level languages to automate data management and processing tasks. For this, a structured and programmatic access to the data store is necessary. Web services are a first step toward this goal. They however lack in functionality and ease of use because they provide only low-level interfaces to databases. We introduce here PyXNAT, a Python module that interacts with The Extensible Neuroimaging Archive Toolkit (XNAT) through native Python calls across multiple operating systems. The choice of Python enables PyXNAT to expose the XNAT Web Services and unify their features with a higher level and more expressive language. PyXNAT provides XNAT users direct access to all the scientific packages in Python. Finally PyXNAT aims to be efficient and easy to use, both as a back-end library to build XNAT clients and as an alternative front-end from the command line.
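Typical PyXNAT usage follows the pattern below: connect to an XNAT server and walk the project/subject hierarchy. The server URL and credentials are placeholders, and the object names follow the PyXNAT documentation, so they should be checked against the installed version.

    from pyxnat import Interface

    central = Interface(server='https://central.xnat.org',
                        user='username', password='password')

    # List accessible projects, then subjects within one of them.
    print(central.select.projects().get())
    for subject in central.select.project('MyProject').subjects():
        print(subject.label())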
76 FR 40082 - Semiannual Regulatory Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-07
...; Constrictor Species From Python, Boa, and Eunectes Genera. Bureau of Ocean Energy Management, Regulation, and... Wildlife Evaluation; Constrictor Species from Python, Boa, and Eunectes Genera Legal Authority: 18 U.S.C... are: Indian python (including Burmese python), reticulated python, Northern African python, Southern...
A geometric approach to identify cavities in particle systems
NASA Astrophysics Data System (ADS)
Voyiatzis, Evangelos; Böhm, Michael C.; Müller-Plathe, Florian
2015-11-01
We present the implementation of a geometric algorithm to identify cavities in particle systems in an open-source Python program. The algorithm makes use of the Delaunay space tessellation. The Python software is based on platform-independent tools, leading to a portable program. Its successful execution provides information concerning the accessible volume fraction of the system, the size and shape of the cavities and the group of atoms forming each of them. The program can be easily incorporated into the LAMMPS software. An advantage of the present algorithm is that no a priori assumption on the cavity shape has to be made. As an example, the cavity size and shape distributions in a polyethylene melt system are presented for three spherical probe particles. This paper also serves as an introductory manual to the script. It summarizes the algorithm, its implementation, the required user-defined parameters as well as the format of the input and output files. Additionally, we demonstrate possible applications of our approach and compare its capability with the ones of well documented cavity size estimators.
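The core geometric idea, flagging Delaunay tetrahedra whose circumsphere can accommodate a spherical probe, can be sketched with SciPy as below. This is a generic illustration of the concept rather than the published program; it ignores periodic boundaries and particle radii.

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(2)
    points = rng.uniform(0.0, 10.0, size=(500, 3))   # toy particle coordinates
    probe_radius = 1.0

    tri = Delaunay(points)
    cavity_simplices = []
    for simplex in tri.simplices:
        p = points[simplex]
        # Circumcenter c satisfies 2*(p_i - p_0) . c = |p_i|^2 - |p_0|^2 for i = 1..3.
        A = 2.0 * (p[1:] - p[0])
        b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
        try:
            center = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue                                 # degenerate (flat) tetrahedron
        circumradius = np.linalg.norm(center - p[0])
        if circumradius > probe_radius:
            cavity_simplices.append(simplex)

    print(len(cavity_simplices), 'of', len(tri.simplices), 'tetrahedra can host the probe')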
Hunter, Margaret E.; Oyler-McCance, Sara J.; Dorazio, Robert M.; Fike, Jennifer A.; Smith, Brian J.; Hunter, Charles T.; Reed, Robert N.; Hart, Kristen M.
2015-01-01
Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors. Generic sampling design and terminology are proposed to standardize and clarify interpretations of eDNA-based occupancy models. PMID:25874630
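The single-season occupancy model underlying such estimates can be written compactly: a site is occupied with probability psi, and eDNA is detected in each water sample with probability p given occupancy. The sketch below fits both probabilities to toy per-site detection counts by maximum likelihood; it is a generic illustration, not the authors' hierarchical model (binomial coefficients are omitted because they do not affect the maximum).

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit   # inverse-logit keeps probabilities in (0, 1)

    K = 3                                              # water samples per site
    detections = np.array([3, 2, 0, 1, 0, 0, 2, 0])    # positive qPCR samples per site (toy data)

    def neg_log_likelihood(params):
        psi, p = expit(params)                         # map unconstrained values to (0, 1)
        like_present = psi * p ** detections * (1 - p) ** (K - detections)
        like_absent = (1 - psi) * (detections == 0)
        return -np.sum(np.log(like_present + like_absent))

    fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], method='Nelder-Mead')
    psi_hat, p_hat = expit(fit.x)
    print(round(psi_hat, 2), round(p_hat, 2))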
Python for Information Theoretic Analysis of Neural Data
Ince, Robin A. A.; Petersen, Rasmus S.; Swan, Daniel C.; Panzeri, Stefano
2008-01-01
Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources. PMID:19242557
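To give a flavour of the computations involved, the plug-in ("naive") estimate of the mutual information between a discrete stimulus and a discrete response takes only a few lines of NumPy; the limited-sampling bias correction that the article discusses is deliberately omitted from this sketch.

    import numpy as np

    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def mutual_information(stimuli, responses):
        joint = np.zeros((stimuli.max() + 1, responses.max() + 1))
        for s, r in zip(stimuli, responses):
            joint[s, r] += 1
        h_s = entropy(joint.sum(axis=1))
        h_r = entropy(joint.sum(axis=0))
        h_sr = entropy(joint.ravel())
        return h_s + h_r - h_sr        # I(S;R) = H(S) + H(R) - H(S,R)

    rng = np.random.default_rng(3)
    stim = rng.integers(0, 4, size=1000)
    resp = (stim + rng.integers(0, 2, size=1000)) % 4  # noisy copy of the stimulus
    print(round(mutual_information(stim, resp), 3))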
A new open-source Python-based Space Weather data access, visualization, and analysis toolkit
NASA Astrophysics Data System (ADS)
de Larquier, S.; Ribeiro, A.; Frissell, N. A.; Spaleta, J.; Kunduri, B.; Thomas, E. G.; Ruohoniemi, J.; Baker, J. B.
2013-12-01
Space weather research relies heavily on combining and comparing data from multiple observational platforms. Current frameworks exist to aggregate some of the data sources, most based on file downloads via web or FTP interfaces. Empirical models are mostly Fortran-based and lack interfaces with more useful scripting languages. In an effort to improve data and model access, the SuperDARN community has been developing a Python-based Space Science Data Visualization Toolkit (DaViTpy). At the center of this development was a redesign of how our data (from 30 years of SuperDARN radars) was made available. Several access solutions are now wrapped into one convenient Python interface which probes local directories, a new remote NoSQL database, and an FTP server to retrieve the requested data based on availability. Motivated by the efficiency of this interface and the inherent need for data from multiple instruments, we implemented similar modules for other space science datasets (POES, OMNI, Kp, AE...), and also included fundamental empirical models with Python interfaces to enhance data analysis (IRI, HWM, MSIS...). All these modules and more are gathered in a single convenient toolkit, which is collaboratively developed and distributed using Github and continues to grow. While still in its early stages, we expect this toolkit will facilitate multi-instrument space weather research and improve scientific productivity.
Pyff - a pythonic framework for feedback applications and stimulus presentation in neuroscience.
Venthur, Bastian; Scholler, Simon; Williamson, John; Dähne, Sven; Treder, Matthias S; Kramarek, Maria T; Müller, Klaus-Robert; Blankertz, Benjamin
2010-01-01
This paper introduces Pyff, the Pythonic feedback framework for feedback applications and stimulus presentation. Pyff provides a platform-independent framework that allows users to develop and run neuroscientific experiments in the programming language Python. Existing solutions have mostly been implemented in C++, which makes for a rather tedious programming task for non-computer-scientists, or in Matlab, which is not well suited for more advanced visual or auditory applications. Pyff was designed to make experimental paradigms (i.e., feedback and stimulus applications) easily programmable. It includes base classes for various types of common feedbacks and stimuli as well as useful libraries for external hardware such as eyetrackers. Pyff is also equipped with a steadily growing set of ready-to-use feedbacks and stimuli. It can be used as a standalone application, for instance providing stimulus presentation in psychophysics experiments, or within a closed loop such as in biofeedback or brain-computer interfacing experiments. Pyff communicates with other systems via a standardized communication protocol and is therefore suitable to be used with any system that may be adapted to send its data in the specified format. Having such a general, open-source framework will help foster a fruitful exchange of experimental paradigms between research groups. In particular, it will decrease the need of reprogramming standard paradigms, ease the reproducibility of published results, and naturally entail some standardization of stimulus presentation.
The instrument control software package for the Habitable-Zone Planet Finder spectrometer
NASA Astrophysics Data System (ADS)
Bender, Chad F.; Robertson, Paul; Stefansson, Gudmundur Kari; Monson, Andrew; Anderson, Tyler; Halverson, Samuel; Hearty, Frederick; Levi, Eric; Mahadevan, Suvrath; Nelson, Matthew; Ramsey, Larry; Roy, Arpita; Schwab, Christian; Shetrone, Matthew; Terrien, Ryan
2016-08-01
We describe the Instrument Control Software (ICS) package that we have built for The Habitable-Zone Planet Finder (HPF) spectrometer. The ICS controls and monitors instrument subsystems, facilitates communication with the Hobby-Eberly Telescope facility, and provides user interfaces for observers and telescope operators. The backend is built around the asynchronous network software stack provided by the Python Twisted engine, and is linked to a suite of custom hardware communication protocols. This backend is accessed through Python-based command-line and PyQt graphical frontends. In this paper we describe several of the customized subsystem communication protocols that provide access to and help maintain the hardware systems that comprise HPF, and show how asynchronous communication benefits the numerous hardware components. We also discuss our Detector Control Subsystem, built as a set of custom Python wrappers around a C-library that provides native Linux access to the SIDECAR ASIC and Hawaii-2RG detector system used by HPF. HPF will be one of the first astronomical instruments on sky to utilize this native Linux capability through the SIDECAR Acquisition Module (SAM) electronics. The ICS we have created is very flexible, and we are adapting it for NEID, NASA's Extreme Precision Doppler Spectrometer for the WIYN telescope; we will describe this adaptation, and describe the potential for use in other astronomical instruments.
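The asynchronous, line-based style of subsystem server that Twisted enables can be illustrated with a minimal protocol; this is a generic sketch, not HPF ICS code, and the command names and reply format are hypothetical.

    from twisted.internet import reactor
    from twisted.internet.protocol import Factory
    from twisted.protocols.basic import LineReceiver

    class SubsystemProtocol(LineReceiver):
        """Toy line-based control protocol (hypothetical commands)."""

        def lineReceived(self, line):
            command = line.decode().strip().upper()
            if command == 'GET_TEMP':
                # A real subsystem would query hardware here without blocking the reactor.
                self.sendLine(b'TEMP 180.002 K')
            else:
                self.sendLine(b'ERROR unknown command')

    class SubsystemFactory(Factory):
        protocol = SubsystemProtocol

    if __name__ == '__main__':
        reactor.listenTCP(9000, SubsystemFactory())
        reactor.run()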
OpenSesame: an open-source, graphical experiment builder for the social sciences.
Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan
2012-06-01
In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.
PyPDB: a Python API for the Protein Data Bank.
Gilpin, William
2016-01-01
We have created a Python programming interface for the RCSB Protein Data Bank (PDB) that allows search and data retrieval for a wide range of result types, including BLAST and sequence motif queries. The API relies on the existing XML-based API and operates by creating custom XML requests from native Python types, allowing extensibility and straightforward modification. The package has the ability to perform many types of advanced search of the PDB that are otherwise only available through the PDB website. PyPDB is implemented exclusively in Python 3 using standard libraries for maximal compatibility. The most up-to-date version, including iPython notebooks containing usage tutorials, is available free-of-charge under an open-source MIT license via GitHub at https://github.com/williamgilpin/pypdb, and the full API reference is at http://williamgilpin.github.io/pypdb_docs/html/. The latest stable release is also available on PyPI. wgilpin@stanford.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
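Typical usage, based on the package documentation from around the time of publication, is sketched below; the function names (make_query, do_search, get_pdb_file) should be treated as assumptions, since later releases reorganised the API.

    from pypdb import make_query, do_search, get_pdb_file

    # Full-text search of the PDB, returning a list of matching PDB IDs (assumed API).
    query = make_query('actin network')
    pdb_ids = do_search(query)
    print(pdb_ids[:5])

    # Retrieve the structure file for one entry.
    pdb_text = get_pdb_file('4LZA', filetype='pdb')
    print(pdb_text[:200])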
pypet: A Python Toolkit for Data Management of Parameter Explorations
Meyer, Robert; Obermayer, Klaus
2016-01-01
pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080
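A minimal parameter exploration with pypet typically looks like the sketch below; the names (Environment, f_add_parameter, f_explore, cartesian_product, f_add_result) follow the pypet documentation and may differ slightly between versions.

    from pypet import Environment, cartesian_product

    def simulate(traj):
        z = traj.x * traj.y
        traj.f_add_result('z', z, comment='product of x and y')

    env = Environment(trajectory='example',
                      filename='./results/example.hdf5',
                      overwrite_file=True)
    traj = env.trajectory

    traj.f_add_parameter('x', 1.0, comment='first factor')
    traj.f_add_parameter('y', 1.0, comment='second factor')
    traj.f_explore(cartesian_product({'x': [1.0, 2.0, 3.0], 'y': [4.0, 5.0]}))

    env.run(simulate)       # runs simulate for all 6 parameter combinations
    env.disable_logging()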
p3d--Python module for structural bioinformatics.
Fufezan, Christian; Specht, Michael
2009-08-21
High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this, the Python scripting language is an optimal choice since its philosophy is to write understandable source code. p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, b) set theory and c) functions that allow combining a and b and that use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. p3d is well suited to quickly develop tools for structural bioinformatics using the Python scripting language.
Photometric Calibration of the Gemini South Adaptive Optics Imager
NASA Astrophysics Data System (ADS)
Stevenson, Sarah Anne; Rodrigo Carrasco Damele, Eleazar; Thomas-Osip, Joanna
2017-01-01
The Gemini South Adaptive Optics Imager (GSAOI) is an instrument available on the Gemini South telescope at Cerro Pachon, Chile, utilizing the Gemini Multi-Conjugate Adaptive Optics System (GeMS). In order to allow users to easily perform photometry with this instrument and to monitor any changes in the instrument in the future, we seek to set up a process for performing photometric calibration with standard star observations taken across the time of the instrument’s operation. We construct a Python-based pipeline that includes IRAF wrappers for reduction and combines the AstroPy photutils package and original Python scripts with the IRAF apphot and photcal packages to carry out photometry and linear regression fitting. Using the pipeline, we examine standard star observations made with GSAOI on 68 nights between 2013 and 2015 in order to determine the nightly photometric zero points in the J, H, Kshort, and K bands. This work is based on observations obtained at the Gemini Observatory, processed using the Gemini IRAF and gemini_python packages, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil).
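The zero-point measurement at the heart of such a pipeline reduces to aperture photometry of standard stars; the sketch below uses photutils for a single star and is a generic illustration rather than the pipeline itself (the image, star position, exposure time and catalogue magnitude are placeholders).

    import numpy as np
    from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

    image = np.random.poisson(100.0, size=(200, 200)).astype(float)  # toy sky frame
    image[95:105, 95:105] += 5000.0                                  # fake standard star

    position = [(100.0, 100.0)]
    aperture = CircularAperture(position, r=6.0)
    annulus = CircularAnnulus(position, r_in=10.0, r_out=15.0)

    phot = aperture_photometry(image, [aperture, annulus])
    background = phot['aperture_sum_1'][0] / annulus.area * aperture.area
    net_counts = phot['aperture_sum_0'][0] - background

    exposure_time = 30.0      # seconds (placeholder)
    catalogue_mag = 12.50     # catalogue magnitude of the standard (placeholder)
    zero_point = catalogue_mag + 2.5 * np.log10(net_counts / exposure_time)
    print(round(zero_point, 3))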
Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing
NASA Astrophysics Data System (ADS)
Tang, Jingyin; Matyas, Corene J.
2018-02-01
Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility between the market-dominating ArcGIS software stack and the Linux operating system. This manuscript details a cross-platform geospatial library "arc4nix" to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run in the native Python environment. It uses functional programming and meta-programming techniques to dynamically construct Python code containing the actual geospatial calculations, send it to a server and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arcpy scales linearly in a distributed environment. Arc4nix is open-source software.
GMES: A Python package for solving Maxwell’s equations using the FDTD method
NASA Astrophysics Data System (ADS)
Chun, Kyungwon; Kim, Huioon; Kim, Hyounggyu; Jung, Kil Su; Chung, Youngjoo
2013-04-01
This paper describes GMES, a free Python package for solving Maxwell's equations using the finite-difference time-domain (FDTD) method. The design of GMES follows the object-oriented programming (OOP) approach and adopts a unique design strategy in which the voxels in the computational domain are grouped and then updated according to their material type. This piecewise updating scheme ensures that GMES can adopt OOP without losing its simple structure and time-stepping speed. Users can easily add various material types, sources, and boundary conditions into their code using the Python programming language. The key design features, along with the supported material types, excitation sources, boundary conditions and parallel calculations employed in GMES, are also described in detail.
Program summary:
Catalog identifier: AEOK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License v3.0
No. of lines in distributed program, including test data, etc.: 17700
No. of bytes in distributed program, including test data, etc.: 89878
Distribution format: tar.gz
Programming language: C++, Python
Computer: Any computer with a Unix-like system, a C++ compiler, and a Python interpreter; developed on a 2.53 GHz Intel Core i3
Operating system: Any Unix-like system; developed under Ubuntu 12.04 LTS 64-bit
Has the code been vectorized or parallelized?: Yes. Parallelized with MPI directives (optional).
RAM: Problem dependent (a simulation with real-valued electromagnetic fields uses roughly 0.18 KB per Yee cell)
Classification: 10
External routines: SWIG [1], Cython [2], NumPy [3], SciPy [4], matplotlib [5], MPI for Python [6]
Nature of problem: Classical electrodynamics
Solution method: Finite-difference time-domain (FDTD) method
Additional comments: This article describes version 0.9.5. The most recent version can be downloaded from the GMES project homepage [7].
Running time: Problem dependent (a simulation with real-valued electromagnetic fields typically takes about 0.16 μs per Yee cell per time step)
[1] SWIG, http://www.swig.org. [2] Cython, http://www.cython.org. [3] NumPy, http://numpy.scipy.org. [4] SciPy, http://www.scipy.org. [5] matplotlib, http://matplotlib.sourceforge.net. [6] MPI for Python, http://mpi4py.scipy.org. [7] GMES, http://sourceforge.net/projects/gmes.
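For readers unfamiliar with the method, the following is a textbook one-dimensional Yee update loop in plain NumPy, illustrating the FDTD scheme that GMES implements; it is not the GMES API.

```python
# Generic 1-D FDTD (Yee scheme) update loop in normalized units; not the GMES API.
import numpy as np

nz, nt = 200, 500
ez = np.zeros(nz)       # electric field at integer grid points
hy = np.zeros(nz - 1)   # magnetic field, staggered by half a cell
for t in range(nt):
    hy += np.diff(ez)                                  # update H from the curl of E
    ez[1:-1] += np.diff(hy)                            # update E from the curl of H (free space)
    ez[nz // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)   # soft Gaussian source at the centre
```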
PyGPlates - a GPlates Python library for data analysis through space and deep geological time
NASA Astrophysics Data System (ADS)
Williams, Simon; Cannon, John; Qin, Xiaodong; Müller, Dietmar
2017-04-01
A fundamental consideration for studying the Earth through deep time is that the configurations of the continents, tectonic plates, and plate boundaries are continuously changing. Within a diverse range of fields including geodynamics, paleoclimate, and paleobiology, the importance of considering geodata in their reconstructed context across previous cycles of supercontinent aggregation, dispersal and ocean basin evolution is widely recognised. Open-source software tools such as GPlates provide paleo-geographic information systems for geoscientists to combine a wide variety of geodata and examine them within tectonic reconstructions through time. The availability of such powerful tools also brings new challenges - we want to learn something about the key associations between reconstructed plate motions and the geological record, but the parameter space is high-dimensional, making it difficult for a human to visually comprehend and quantify these associations. To achieve true spatio-temporal data-mining, new tools are needed. Here, we present a further development of the GPlates ecosystem - a Python-based tool for geotectonic analysis. In contrast to existing GPlates tools that are built around a graphical user interface (GUI) and interactive visualisation, pyGPlates offers a programming interface for the automation of quantitative plate tectonic analysis of arbitrary complexity. The vast array of open-source Python-based tools for data-mining, statistics and machine learning can now be linked to pyGPlates, allowing spatial data to be seamlessly analysed in space and geological "deep time", with the ability to spread large computations across multiple processors. The presentation will illustrate a range of example applications, both simple and advanced. Basic examples include data querying, filtering, reconstruction, and file-format conversions. For the innovative study of plate kinematics, pyGPlates has been used to explore the relationships between absolute plate motions, subduction zone kinematics, and mid-ocean ridge migration and orientation through deep time; to investigate the systematics of continental rift velocity evolution during Pangea breakup; and to make connections between the kinematics of the Andean subduction zone and ore deposit formation. To support the numerical modelling community, pyGPlates facilitates the connection between tectonic surface boundary conditions contained within plate tectonic reconstructions (plate boundary configurations and plate velocities) and simulations such as thermo-mechanical models of lithospheric deformation and mantle convection. To support the development of web-based applications that can serve the wider geoscience community, we will demonstrate how pyGPlates can be combined with other open-source tools to serve alternative reconstructions together with a diverse array of reconstructed data sets in a self-consistent framework over the internet. PyGPlates is available to the public via the GPlates web site and contains comprehensive documentation covering installation on Windows/Mac/Linux platforms, sample code, tutorials and a detailed reference of pyGPlates functions and classes.
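A minimal example of the kind of scripted reconstruction pyGPlates enables; the file names are placeholders, while pygplates.RotationModel and pygplates.reconstruct follow the documented public API (treat exact argument details as assumptions).

```python
# Short pyGPlates sketch: reconstruct present-day features back to 50 Ma.
# File names are placeholders.
import pygplates

rotation_model = pygplates.RotationModel('rotations.rot')
pygplates.reconstruct('coastlines.gpml', rotation_model,
                      'reconstructed_coastlines_50Ma.shp', 50.0)
```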
Teaching Introductory GIS Programming to Geographers Using an Open Source Python Approach
ERIC Educational Resources Information Center
Etherington, Thomas R.
2016-01-01
Computer programming is not commonly taught to geographers as a part of geographic information system (GIS) courses, but the advent of NeoGeography, big data and open GIS means that programming skills are becoming more important. To encourage the teaching of programming to geographers, this paper outlines a course based around a series of…
NASA Astrophysics Data System (ADS)
Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry
2010-05-01
The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development, which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool: (1) mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a Python parser processes the XML files generated by the mind maps, (3) Django (Python) is used to generate the dynamic structure and content of the web-based questionnaire from the processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (Python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who would ordinarily never interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire reflects what scientists want to know about the models. Keywords: metadata, CMIP5, automatic information capture, tool development
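A hypothetical sketch, not the actual METAFOR code, of how Django form fields might be generated dynamically from a controlled vocabulary parsed out of mind-map-derived XML; the element and attribute names are invented for illustration.

```python
# Hypothetical sketch of step (3): build a Django form whose choice fields
# mirror a controlled vocabulary held in XML (element names are assumptions).
import xml.etree.ElementTree as ET
from django import forms

def questionnaire_form_from_vocab(xml_path):
    """Return a dynamically generated Django Form class for one vocabulary file."""
    vocab = ET.parse(xml_path).getroot()
    fields = {}
    for term in vocab.findall('.//term'):                      # assumed element name
        choices = [(v.text, v.text) for v in term.findall('value')]
        fields[term.get('name')] = forms.ChoiceField(choices=choices, required=False)
    return type('QuestionnaireForm', (forms.Form,), fields)
```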
HydroUnits: A Python-based Physical Units Management Tool in Hydrologic Computing Systems
NASA Astrophysics Data System (ADS)
Celicourt, P.; Piasecki, M.
2015-12-01
While one objective of data management systems is to provide the units when annotating the collected data, another is that the units must be correctly manipulated during conversion steps. This is not a trivial task, however, and for large datasets the time spent on, and errors introduced by, unit conversion can be quite costly. To date, more than a dozen Python modules have been developed to deal with units attached to quantities. However, they fall short in many ways and also suffer from not integrating with a units controlled vocabulary. Moreover, none of them permits the encoding of some complex units defined in the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc.'s Observations Data Model (CUAHSI ODM) as a vectorial representation for reduced storage demand, nor do they incorporate provisions to accommodate unforeseen standards-based units. We developed HydroUnits, a Python-based units management tool, for three specific purposes: encoding of physical units in the Transducer Electronic Data Sheet (TEDS) as defined in the IEEE 1451.0 standard; performing dimensional analysis; and on-the-fly conversion of time series, allowing users to retrieve data from a data source in a desired equivalent unit while accommodating unforeseen and user-defined units. HydroUnits differentiates itself from existing tools by a number of factors, including the implementation approach adopted, the adoption of standards-based unit naming conventions and, more importantly, the emphasis on units controlled vocabularies, which are a critical aspect of units treatment. Additionally, HydroUnits supports unit conversion for quantities with an additive scaling factor, natively supports time-series conversion, and takes leap years into consideration for units involving the time dimension (e.g., month, minute). Due to its overall implementation approach, HydroUnits exhibits a high level of versatility that no other tool we are aware of has achieved.
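The additive-offset case is the subtle one; the following generic illustration (deliberately not the HydroUnits API) shows why temperature-like units need an offset in addition to a scale factor.

```python
# Generic illustration of linear unit conversion with an additive offset;
# this is not the HydroUnits API.
def convert(value, scale, offset=0.0):
    """Apply y = scale * x + offset, the general linear unit conversion."""
    return scale * value + offset

water_temp_c = 21.5
water_temp_f = convert(water_temp_c, scale=9.0 / 5.0, offset=32.0)   # 70.7 deg F
discharge_cfs = convert(2.0, scale=35.3147)                          # m^3/s -> ft^3/s
```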
Design and implementation of a risk assessment module in a spatial decision support system
NASA Astrophysics Data System (ADS)
Zhang, Kaixi; van Westen, Cees; Bakker, Wim
2014-05-01
The spatial decision support system named 'Changes SDSS' is currently under development. The goal of this system is to analyze changing hydro-meteorological hazards and the effect of risk reduction alternatives to support decision makers in choosing the best alternatives. The risk assessment module within the system is to assess the current risk, analyze the risk after implementations of risk reduction alternatives, and analyze the risk in different future years when considering scenarios such as climate change, land use change and population growth. The objective of this work is to present the detailed design and implementation plan of the risk assessment module. The main challenges faced consist of how to shift the risk assessment from traditional desktop software to an open source web-based platform, the availability of input data and the inclusion of uncertainties in the risk analysis. The risk assessment module is developed using Ext JS library for the implementation of user interface on the client side, using Python for scripting, as well as PostGIS spatial functions for complex computations on the server side. The comprehensive consideration of the underlying uncertainties in input data can lead to a better quantification of risk assessment and a more reliable Changes SDSS, since the outputs of risk assessment module are the basis for decision making module within the system. The implementation of this module will contribute to the development of open source web-based modules for multi-hazard risk assessment in the future. This work is part of the "CHANGES SDSS" project, funded by the European Community's 7th Framework Program.
Nunez-Iglesias, Juan; Blanch, Adam J; Looker, Oliver; Dixon, Matthew W; Tilley, Leann
2018-01-01
We present Skan (Skeleton analysis), a Python library for the analysis of the skeleton structures of objects. It was inspired by the "analyse skeletons" plugin for the Fiji image analysis software, but its extensive Application Programming Interface (API) allows users to examine and manipulate any intermediate data structures produced during the analysis. Further, its use of common Python data structures such as SciPy sparse matrices and pandas data frames opens the results to analysis within the extensive ecosystem of scientific libraries available in Python. We demonstrate the validity of Skan's measurements by comparing its output to the established Analyze Skeletons Fiji plugin, and, with a new scanning electron microscopy (SEM)-based method, we confirm that the malaria parasite Plasmodium falciparum remodels the host red blood cell cytoskeleton, increasing the average distance between spectrin-actin junctions.
Looker, Oliver; Dixon, Matthew W.; Tilley, Leann
2018-01-01
We present Skan (Skeleton analysis), a Python library for the analysis of the skeleton structures of objects. It was inspired by the “analyse skeletons” plugin for the Fiji image analysis software, but its extensive Application Programming Interface (API) allows users to examine and manipulate any intermediate data structures produced during the analysis. Further, its use of common Python data structures such as SciPy sparse matrices and pandas data frames opens the results to analysis within the extensive ecosystem of scientific libraries available in Python. We demonstrate the validity of Skan’s measurements by comparing its output to the established Analyze Skeletons Fiji plugin, and, with a new scanning electron microscopy (SEM)-based method, we confirm that the malaria parasite Plasmodium falciparum remodels the host red blood cell cytoskeleton, increasing the average distance between spectrin-actin junctions. PMID:29472997
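A short example of the kind of analysis Skan exposes; the Skeleton and summarize names follow the skan documentation, but exact signatures and output column names should be treated as assumptions.

```python
# Sketch of a Skan branch analysis on a binary image (file name is a placeholder).
from skan import Skeleton, summarize
from skimage import io, morphology

image = io.imread('cytoskeleton.png', as_gray=True) > 0.5
skeleton = morphology.skeletonize(image)
branch_data = summarize(Skeleton(skeleton))   # a pandas DataFrame of branch statistics
print(branch_data[['branch-distance', 'branch-type']].head())
```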
Type Theory, Computation and Interactive Theorem Proving
2015-09-01
postdoc Cody Roux, to develop new methods of verifying real-valued inequalities automatically. They developed a prototype implementation in Python [8] (an...he has developed new heuristic, geometric methods of verifying real-valued inequalities. A Python-based implementation has performed surprisingly...express complex mathematical and computational assertions. In this project, Avigad and Harper developed type-theoretic algorithms and formalisms that
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
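A heavily hedged sketch of submitting a model run through CondorPy; the class, template, and attribute names below follow our reading of the CondorPy documentation and are assumptions rather than verified API.

```python
# Hedged sketch of queuing a model run with CondorPy (names are assumptions).
from condorpy import Job, Templates

job = Job('hydrologic_run', Templates.vanilla_transfer_files)
job.executable = 'run_model.py'            # the model driver script
job.arguments = '--scenario baseline'
job.transfer_input_files = 'inputs.zip'
job.submit()                               # queued and dispatched by HTCondor
```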
2017-02-01
note, a number of different measures implemented in both MATLAB and Python as functions are used to quantify similarity/distance between 2 vector-based...this technical note are widely used and may have an important role when computing the distance and similarity of large datasets and when considering high...throughput processes. In this technical note, a number of different measures implemented in both MATLAB and Python as functions are used to
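For concreteness, here are two such measures written as plain Python/NumPy functions; these are generic implementations, not the code from the technical note.

```python
# Generic vector similarity/distance measures of the kind discussed above.
import numpy as np

def euclidean_distance(a, b):
    """Straight-line distance between two vectors."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1 = identical direction)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```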
Calculations of lattice vibrational mode lifetimes using Jazz: a Python wrapper for LAMMPS
NASA Astrophysics Data System (ADS)
Gao, Y.; Wang, H.; Daw, M. S.
2015-06-01
Jazz is a new Python wrapper for LAMMPS [1], implemented to calculate the lifetimes of vibrational normal modes based on forces as calculated for any interatomic potential available in that package. The anharmonic character of the normal modes is analyzed via the Monte Carlo-based moments approximation described in Gao and Daw [2]. It is distributed as open-source software and can be downloaded from the website http://jazz.sourceforge.net/.
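Jazz's own interface is not shown in the abstract; the sketch below only illustrates the standard lammps Python module that such a wrapper builds on, with calls taken from the LAMMPS Python interface documentation (the input script name is a placeholder).

```python
# Driving LAMMPS from Python via the lammps module that ships with LAMMPS.
from lammps import lammps

lmp = lammps()
lmp.file('in.normal_modes')              # input script defining the crystal and potential
natoms = lmp.get_natoms()
forces = lmp.gather_atoms('f', 1, 3)     # ctypes array of 3*natoms per-atom force components
```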
PyNEST: A Convenient Interface to the NEST Simulator.
Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver
2008-01-01
The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures from single-core laptops over multi-core desktop computers to super-computers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
PyNEST: A Convenient Interface to the NEST Simulator
Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver
2008-01-01
The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures from single-core laptops over multi-core desktop computers to super-computers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used. PMID:19198667
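A minimal PyNEST session of the kind described above; the model and parameter names follow standard NEST examples and current PyNEST conventions, which differ slightly from the 2008-era API.

```python
# Minimal PyNEST example: a noise-driven integrate-and-fire neuron.
import nest

neuron = nest.Create('iaf_psc_alpha')
noise = nest.Create('poisson_generator', params={'rate': 80000.0})
voltmeter = nest.Create('voltmeter')

nest.Connect(noise, neuron, syn_spec={'weight': 1.0})
nest.Connect(voltmeter, neuron)
nest.Simulate(1000.0)   # milliseconds
```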
A pipeline for comprehensive and automated processing of electron diffraction data in IPLT.
Schenk, Andreas D; Philippsen, Ansgar; Engel, Andreas; Walz, Thomas
2013-05-01
Electron crystallography of two-dimensional crystals allows the structural study of membrane proteins in their native environment, the lipid bilayer. Determining the structure of a membrane protein at near-atomic resolution by electron crystallography remains, however, a very labor-intense and time-consuming task. To simplify and accelerate the data processing aspect of electron crystallography, we implemented a pipeline for the processing of electron diffraction data using the Image Processing Library and Toolbox (IPLT), which provides a modular, flexible, integrated, and extendable cross-platform, open-source framework for image processing. The diffraction data processing pipeline is organized as several independent modules implemented in Python. The modules can be accessed either from a graphical user interface or through a command line interface, thus meeting the needs of both novice and expert users. The low-level image processing algorithms are implemented in C++ to achieve optimal processing performance, and their interface is exported to Python using a wrapper. For enhanced performance, the Python processing modules are complemented with a central data managing facility that provides a caching infrastructure. The validity of our data processing algorithms was verified by processing a set of aquaporin-0 diffraction patterns with the IPLT pipeline and comparing the resulting merged data set with that obtained by processing the same diffraction patterns with the classical set of MRC programs. Copyright © 2013 Elsevier Inc. All rights reserved.
A pipeline for comprehensive and automated processing of electron diffraction data in IPLT
Schenk, Andreas D.; Philippsen, Ansgar; Engel, Andreas; Walz, Thomas
2013-01-01
Electron crystallography of two-dimensional crystals allows the structural study of membrane proteins in their native environment, the lipid bilayer. Determining the structure of a membrane protein at near-atomic resolution by electron crystallography remains, however, a very labor-intense and time-consuming task. To simplify and accelerate the data processing aspect of electron crystallography, we implemented a pipeline for the processing of electron diffraction data using the Image Processing Library & Toolbox (IPLT), which provides a modular, flexible, integrated, and extendable cross-platform, open-source framework for image processing. The diffraction data processing pipeline is organized as several independent modules implemented in Python. The modules can be accessed either from a graphical user interface or through a command line interface, thus meeting the needs of both novice and expert users. The low-level image processing algorithms are implemented in C++ to achieve optimal processing performance, and their interface is exported to Python using a wrapper. For enhanced performance, the Python processing modules are complemented with a central data managing facility that provides a caching infrastructure. The validity of our data processing algorithms was verified by processing a set of aquaporin-0 diffraction patterns with the IPLT pipeline and comparing the resulting merged data set with that obtained by processing the same diffraction patterns with the classical set of MRC programs. PMID:23500887
Temporal and spatial complexity of maternal thermoregulation in tropical pythons.
Stahlschmidt, Zachary Ross; Shine, Richard; Denardo, Dale F
2012-01-01
Parental care is a widespread adaptation that evolved independently in a broad range of taxa. Although the dynamics by which two parents meet the developmental needs of offspring are well studied in birds, we lack understanding about the temporal and spatial complexity of parental care in taxa exhibiting female-only care, the predominant mode of parental care. Thus, we examined the behavioral and physiological mechanisms by which female water pythons Liasis fuscus meet a widespread developmental need (thermoregulation) in a natural setting. Although female L. fuscus were not facultatively thermogenic, they did use behaviors on multiple spatial scales (e.g., shifts in egg-brooding postures and surface activity patterns) to balance the thermal needs of their offspring throughout reproduction (gravidity and egg brooding). Maternal behaviors in L. fuscus varied by stage within reproduction and were mediated by interindividual variation in body size and fecundity. Female pythons with relatively larger clutch sizes were cooler during egg brooding, suggesting a trade-off between reproductive quantity (size of clutch) and quality (developmental temperature). In nature, caregiving parents of all taxa must navigate both extrinsic factors (temporal and spatial complexity) and intrinsic factors (body size and fecundity) to meet the needs of their offspring. Our study used a comprehensive approach that can be used as a general template for future research examining the dynamics by which parents meet other developmental needs (e.g., predation risk or energy balance).
Boughner, Julia C; Buchtová, Marcela; Fu, Katherine; Diewert, Virginia; Hallgrímsson, Benedikt; Richman, Joy M
2007-01-01
This study explores the post-ovipositional craniofacial development of the African Rock Python (Python sebae). We first describe a staging system based on external characteristics and next use whole-mount skeletal staining supplemented with Computed tomography (CT) scanning to examine skeletal development. Our results show that python embryos are in early stages of organogenesis at the time of laying, with separate facial prominences and pharyngeal clefts still visible. Limb buds are also visible. By 11 days (stage 3), the chondrocranium is nearly fully formed; however, few intramembranous bones can be detected. One week later (stage 4), many of the intramembranous upper and lower jaw bones are visible but the calvaria are not present. Skeletal elements in the limbs also begin to form. Between stages 4 (day 18) and 7 (day 44), the complete set of intramembranous bones in the jaws and calvaria develops. Hindlimb development does not progress beyond stage 6 (33 days) and remains rudimentary throughout adult life. In contrast to other reptiles, there are two rows of teeth in the upper jaw. The outer tooth row is attached to the maxillary and premaxillary bones, whereas the inner row is attached to the pterygoid and palatine bones. Erupted teeth can be seen in whole-mount stage 10 specimens and are present in an unerupted, mineralized state at stage 7. Micro-CT analysis reveals that all the young membranous bones can be recognized even out of the context of the skull. These data demonstrate intrinsic patterning of the intramembranous bones, even though they form without a cartilaginous template. In addition, intramembranous bone morphology is established prior to muscle function, which can influence bone shape through differential force application. After careful staging, we conclude that python skeletal development occurs slowly enough to observe in good detail the early stages of craniofacial skeletogenesis. Thus, reptilian animal models will offer unique opportunities for understanding the early influences that contribute to perinatal bone shape.
Buchtová, Marcela; Boughner, Julia C; Fu, Katherine; Diewert, Virginia M; Richman, Joy M
2007-01-01
This study explores the microscopic craniofacial morphogenesis of the oviparous African rock python (Python sebae) spanning the first two-thirds of the post-oviposition period. At the time of laying, the python embryo consists of largely undifferentiated mesenchyme and epithelium with the exception of the cranial base and trabeculae cranii, which are undergoing chondrogenesis. The facial prominences are well defined and are at a late stage, close to the time when lip fusion begins. Later (11-12d), specializations in the epithelia begin to differentiate (vomeronasal and olfactory epithelia, teeth). Dental development in snakes is different from that of mammals in several aspects including an extended dental lamina with the capacity to form 4 sets of generational teeth. In addition, the ophidian olfactory system is very different from the mammalian. There is a large vomeronasal organ, a nasal cavity proper and an extraconchal space. All of these areas are lined with a greatly expanded olfactory epithelium. Intramembranous bone differentiation is taking place at stage 3 with some bones already ossifying whereas most are only represented as mesenchymal condensations. In addition to routine histological staining, PCNA immunohistochemistry reveals relatively higher levels of proliferation in the extending dental laminae, in osseous mesenchymal condensations and in the olfactory epithelia. Areas undergoing apoptosis were noted in the enamel organs of the teeth and osseous mesenchymal condensations. We propose that localized apoptosis helps to divide a single condensation into multiple ossification centres and this is a mechanism whereby novel morphology can be selected in response to evolutionary pressures. Several additional differences in head morphology between snakes and other amniotes were noted including a palatal groove separating the inner and outer row of teeth in the upper jaw, a tracheal opening within the tongue and a pharyngeal adhesion that closes off the pharynx from the oral cavity between stages 1 and 4. Our studies on these and other differences in the python will provide valuable insights into developmental, molecular and evolutionary mechanisms of patterning.
Python as a federation tool for GENESIS 3.0.
Cornelis, Hugo; Rodriguez, Armando L; Coop, Allan D; Bower, James M
2012-01-01
The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be 'glued' together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. (3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience.
Python as a Federation Tool for GENESIS 3.0
Cornelis, Hugo; Rodriguez, Armando L.; Coop, Allan D.; Bower, James M.
2012-01-01
The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be ‘glued’ together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. (3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience. PMID:22276101
Light-weight Parallel Python Tools for Earth System Modeling Workflows
NASA Astrophysics Data System (ADS)
Mickelson, S. A.; Paul, K.; Xu, H.; Dennis, J.; Brown, D. I.
2015-12-01
With the growth in computing power over the last 30 years, earth system modeling codes have become increasingly data-intensive. As an example, it is expected that the data required for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) will increase by more than 10x to an expected 25PB per climate model. Faced with this daunting challenge, developers of the Community Earth System Model (CESM) have chosen to change the format of their data for long-term storage from time-slice to time-series, in order to reduce the download bandwidth needed for later analysis and post-processing by climate scientists. Hence, efficient tools are required to (1) transform the data from time-slice to time-series format and to (2) compute climatology statistics, needed for many diagnostic computations, on the resulting time-series data. To address the first of these two challenges, we have developed a parallel Python tool for converting time-slice model output to time-series format. To address the second, we have developed a parallel Python tool to perform fast time-averaging of time-series data. These tools are designed to be light-weight and easy to install, have very few dependencies, and can be inserted into the Earth system modeling workflow with negligible disruption. In this work, we present the motivation, approach, and testing results of these two light-weight parallel Python tools, as well as our plans for future research and development.
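The second task, climatology computation, amounts to averaging each calendar month across years. A serial sketch with netCDF4 and NumPy follows; the file and variable names are placeholders, and the actual tools parallelize this step.

```python
# Schematic monthly-climatology calculation from time-series output
# (serial; placeholder file and variable names).
import numpy as np
from netCDF4 import Dataset

with Dataset('tas_monthly_timeseries.nc') as nc:
    tas = nc.variables['tas'][:]          # shape: (time, lat, lon), monthly means

# Average each calendar month over all years to get 12 climatological fields.
climatology = np.array([tas[m::12].mean(axis=0) for m in range(12)])
```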
The Lake Tahoe Basin Land Use Simulation Model
Forney, William M.; Oldham, I. Benson
2011-01-01
This U.S. Geological Survey Open-File Report describes the final modeling product for the Tahoe Decision Support System project for the Lake Tahoe Basin funded by the Southern Nevada Public Land Management Act and the U.S. Geological Survey's Geographic Analysis and Monitoring Program. This research was conducted by the U.S. Geological Survey Western Geographic Science Center. The purpose of this report is to describe the basic elements of the novel Lake Tahoe Basin Land Use Simulation Model, publish samples of the data inputs, basic outputs of the model, and the details of the Python code. The results of this report include a basic description of the Land Use Simulation Model, descriptions and summary statistics of model inputs, two figures showing the graphical user interface from the web-based tool, samples of the two input files, seven tables of basic output results from the web-based tool and descriptions of their parameters, and the fully functional Python code.
AESOP: A Python Library for Investigating Electrostatics in Protein Interactions.
Harrison, Reed E S; Mohan, Rohith R; Gorham, Ronald D; Kieslich, Chris A; Morikis, Dimitrios
2017-05-09
Electric fields often play a role in guiding the association of protein complexes. Such interactions can be further engineered to accelerate complex association, resulting in protein systems with increased productivity. This is especially true for enzymes where reaction rates are typically diffusion limited. To facilitate quantitative comparisons of electrostatics in protein families and to describe electrostatic contributions of individual amino acids, we previously developed a computational framework called AESOP. We now implement this computational tool in Python with increased usability and the capability of performing calculations in parallel. AESOP utilizes PDB2PQR and Adaptive Poisson-Boltzmann Solver to generate grid-based electrostatic potential files for protein structures provided by the end user. There are methods within AESOP for quantitatively comparing sets of grid-based electrostatic potentials in terms of similarity or generating ensembles of electrostatic potential files for a library of mutants to quantify the effects of perturbations in protein structure and protein-protein association. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Weather forecasting with open source software
NASA Astrophysics Data System (ADS)
Rautenhaus, Marc; Dörnbrack, Andreas
2013-04-01
To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.
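As a minimal illustration of those building blocks, the sketch below reads a CF-style forecast field with netCDF4 and plots it with Matplotlib; the file and variable names are placeholders, and this is not the Mission Support System code.

```python
# Illustrative only: plot one forecast field from a CF-compliant NetCDF file.
import matplotlib.pyplot as plt
from netCDF4 import Dataset

with Dataset('ecmwf_forecast.nc') as nc:          # placeholder file name
    lon = nc.variables['longitude'][:]
    lat = nc.variables['latitude'][:]
    temp = nc.variables['t2m'][0]                  # first forecast step (placeholder variable)

plt.contourf(lon, lat, temp, levels=20)
plt.colorbar(label='2 m temperature (K)')
plt.savefig('forecast_map.png', dpi=150)
```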
COSMOS: Python library for massively parallel workflows
Gafni, Erik; Luquette, Lovelace J.; Lancaster, Alex K.; Hawkins, Jared B.; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P.; Tonellato, Peter J.
2014-01-01
Summary: Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Availability and implementation: Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. Contact: dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24982428
COSMOS: Python library for massively parallel workflows.
Gafni, Erik; Luquette, Lovelace J; Lancaster, Alex K; Hawkins, Jared B; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P; Tonellato, Peter J
2014-10-15
Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
2015-06-01
unit may set up and tear down the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical...language and runs on Linux and Unix-based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network...start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python
77 FR 7968 - Semiannual Regulatory Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-13
...; Constrictor Species From Python, Boa, and Eunectes Genera. National Park Service--Proposed Rule Stage... Evaluation; Constrictor Species From Python, Boa, and Eunectes Genera Legal Authority: 18 U.S.C. 42 Abstract... wildlife under the Lacey Act: Indian python (including Burmese python), reticulated python, Northern...
Towards a new Mercator Observatory Control System
NASA Astrophysics Data System (ADS)
Pessemier, W.; Raskin, G.; Prins, S.; Saey, P.; Merges, F.; Padilla, J. P.; Van Winckel, H.; Waelkens, C.
2010-07-01
A new control system is currently being developed for the 1.2-meter Mercator Telescope at the Roque de Los Muchachos Observatory (La Palma, Spain). Formerly based on transputers, the new Mercator Observatory Control System (MOCS) consists of a small network of Linux computers complemented by a central industrial controller and an industrial real-time data communication network. Python is chosen as the high-level language to develop flexible yet powerful supervisory control and data acquisition (SCADA) software for the Linux computers. Specialized applications such as detector control, auto-guiding and middleware management are also integrated in the same Python software package. The industrial controller, on the other hand, is connected to the majority of the field devices and is targeted to run various control loops, some of which are real-time critical. Independently of the Linux distributed control system (DCS), this controller makes sure that high priority tasks such as the telescope motion, mirror support and hydrostatic bearing control are carried out in a reliable and safe way. A comparison is made between different controller technologies including a LabVIEW embedded system, a PROFINET Programmable Logic Controller (PLC) and motion controller, and an EtherCAT embedded PC (soft-PLC). As the latter is chosen as the primary platform for the lower level control, a substantial part of the software is being ported to the IEC 61131-3 standard programming languages. Additionally, obsolete hardware is gradually being replaced by standard industrial alternatives with fast EtherCAT communication. The use of Python as a scripting language allows a smooth migration to the final MOCS: finished parts of the new control system can readily be commissioned to replace the corresponding transputer units of the old control system with minimal downtime. In this contribution, we give an overview of the systems design, implementation details and the current status of the project.
Bednar, James A.
2008-01-01
Many neural regions are arranged into two-dimensional topographic maps, such as the retinotopic maps in mammalian visual cortex. Computational simulations have led to valuable insights about how cortical topography develops and functions, but further progress has been hindered by the lack of appropriate tools. It has been particularly difficult to bridge across levels of detail, because simulators are typically geared to a specific level, while interfacing between simulators has been a major technical challenge. In this paper, we show that the Python-based Topographica simulator makes it straightforward to build systems that cross levels of analysis, as well as providing a common framework for evaluating and comparing models implemented in other simulators. These results rely on the general-purpose abstractions around which Topographica is designed, along with the Python interfaces becoming available for many simulators. In particular, we present a detailed, general-purpose example of how to wrap an external spiking PyNN/NEST simulation as a Topographica component using only a dozen lines of Python code, making it possible to use any of the extensive input presentation, analysis, and plotting tools of Topographica. Additional examples show how to interface easily with models in other types of simulators. Researchers simulating topographic maps externally should consider using Topographica's analysis tools (such as preference map, receptive field, or tuning curve measurement) to compare results consistently, and for connecting models at different levels. This seamless interoperability will help neuroscientists and computational scientists to work together to understand how neurons in topographic maps organize and operate. PMID:19352443
2011-09-01
rate Python’s maturity as “High.” Python is nine years old and has been continuously developed and enhanced since then. During fiscal year 2010...We rate Python’s developer toolkit availability/extensibility as “Yes.” Python runs on a SQL database and is 64 compatible with Oracle database...
Marsh rabbit mortalities tie pythons to the precipitous decline of mammals in the Everglades
McCleery, Robert A.; Sovie, Adia; Reed, Robert N.; Cunningham, Mark W.; Hunter, Margaret E.; Hart, Kristen M.
2015-01-01
To address the ongoing debate over the impact of invasive species on native terrestrial wildlife, we conducted a large-scale experiment to test the hypothesis that invasive Burmese pythons (Python molurus bivittatus) were a cause of the precipitous decline of mammals in Everglades National Park (ENP). Evidence linking pythons to mammal declines has been indirect and there are reasons to question whether pythons, or any predator, could have caused the precipitous declines seen across a range of mammalian functional groups. Experimentally manipulating marsh rabbits, we found that pythons accounted for 77% of rabbit mortalities within 11 months of their translocation to ENP and that python predation appeared to preclude the persistence of rabbit populations in ENP. On control sites, outside of the park, no rabbits were killed by pythons and 71% of attributable marsh rabbit mortalities were classified as mammal predations. Burmese pythons pose a serious threat to the faunal communities and ecological functioning of the Greater Everglades Ecosystem, which will probably spread as python populations expand their range.
Molecular Identification of Cryptosporidium Species from Pet Snakes in Thailand.
Yimming, Benjarat; Pattanatanang, Khampee; Sanyathitiseree, Pornchai; Inpankaew, Tawin; Kamyingkird, Ketsarin; Pinyopanuwat, Nongnuch; Chimnoi, Wissanuwat; Phasuk, Jumnongjit
2016-08-01
Cryptosporidium is an important pathogen causing gastrointestinal disease in snakes and is distributed worldwide. The main objectives of this study were to detect and identify Cryptosporidium species in captive snakes from exotic pet shops and snake farms in Thailand. In total, 165 fecal samples were examined from 8 snake species, boa constrictor (Boa constrictor constrictor), corn snake (Elaphe guttata), ball python (Python regius), milk snake (Lampropeltis triangulum), king snake (Lampropeltis getula), rock python (Python sebae), rainbow boa (Epicrates cenchria), and carpet python (Morelia spilota). Cryptosporidium oocysts were examined using the dimethyl sulfoxide (DMSO)-modified acid-fast staining and a molecular method based on nested-PCR, PCR-RFLP analysis, and sequencing amplification of the SSU rRNA gene. DMSO-modified acid-fast staining revealed the presence of Cryptosporidium oocysts in 12 out of 165 (7.3%) samples, whereas PCR produced positive results in 40 (24.2%) samples. Molecular characterization indicated the presence of Cryptosporidium parvum (mouse genotype) as the most common species in 24 samples (60%) from 5 species of snake followed by Cryptosporidium serpentis in 9 samples (22.5%) from 2 species of snake and Cryptosporidium muris in 3 samples (7.5%) from P. regius.
Molecular Identification of Cryptosporidium Species from Pet Snakes in Thailand
Yimming, Benjarat; Pattanatanang, Khampee; Sanyathitiseree, Pornchai; Inpankaew, Tawin; Kamyingkird, Ketsarin; Pinyopanuwat, Nongnuch; Chimnoi, Wissanuwat; Phasuk, Jumnongjit
2016-01-01
Cryptosporidium is an important pathogen causing gastrointestinal disease in snakes and is distributed worldwide. The main objectives of this study were to detect and identify Cryptosporidium species in captive snakes from exotic pet shops and snake farms in Thailand. In total, 165 fecal samples were examined from 8 snake species, boa constrictor (Boa constrictor constrictor), corn snake (Elaphe guttata), ball python (Python regius), milk snake (Lampropeltis triangulum), king snake (Lampropeltis getula), rock python (Python sebae), rainbow boa (Epicrates cenchria), and carpet python (Morelia spilota). Cryptosporidium oocysts were examined using the dimethyl sulfoxide (DMSO)-modified acid-fast staining and a molecular method based on nested-PCR, PCR-RFLP analysis, and sequencing amplification of the SSU rRNA gene. DMSO-modified acid-fast staining revealed the presence of Cryptosporidium oocysts in 12 out of 165 (7.3%) samples, whereas PCR produced positive results in 40 (24.2%) samples. Molecular characterization indicated the presence of Cryptosporidium parvum (mouse genotype) as the most common species in 24 samples (60%) from 5 species of snake followed by Cryptosporidium serpentis in 9 samples (22.5%) from 2 species of snake and Cryptosporidium muris in 3 samples (7.5%) from P. regius. PMID:27658593
Reed, R.N.; Hart, K.M.; Rodda, G.H.; Mazzotti, F.J.; Snow, R.W.; Cherkiss, M.; Rozar, R.; Goetz, S.
2011-01-01
Context. Invasive Burmese pythons (Python molurus bivittatus) are established over thousands of square kilometres of southern Florida, USA, and consume a wide range of native vertebrates. Few tools are available to control the python population, and none of the available tools have been validated in the field to assess capture success as a proportion of pythons available to be captured. Aims. Our primary aim was to conduct a trap trial for capturing invasive pythons in an area east of Everglades National Park, where many pythons had been captured in previous years, to assess the efficacy of traps for population control. We also aimed to compare results of visual surveys with trap capture rates, to determine capture rates of non-target species, and to assess capture rates as a proportion of resident pythons in the study area. Methods. We conducted a medium-scale (6053 trap nights) experiment using two types of attractant traps baited with live rats in the Frog Pond area east of Everglades National Park. We also conducted standardised and opportunistic visual surveys in the trapping area. Following the trap trial, the area was disc harrowed to expose pythons and allow calculation of an index of the number of resident pythons. Key results. We captured three pythons and 69 individuals of various rodent, amphibian, and reptile species in traps. Eleven pythons were discovered during disc harrowing operations, as were large numbers of rodents. Conclusions. The trap trial captured a relatively small proportion of the pythons that appeared to be present in the study area, although previous research suggests that trap capture rates improve with additional testing of alternative trap designs. Potential negative impacts to non-target species were minimal. Low python capture rates may have been associated with extremely high local prey abundances during the trap experiment. Implications. Results of this trial illustrate many of the challenges in implementing and interpreting results from tests of control tools for large cryptic predators such as Burmese pythons. © CSIRO 2011.
BioC implementations in Go, Perl, Python and Ruby
Liu, Wanli; Islamaj Doğan, Rezarta; Kwon, Dongseop; Marques, Hernani; Rinaldi, Fabio; Wilbur, W. John; Comeau, Donald C.
2014-01-01
As part of a communitywide effort for evaluating text mining and information extraction systems applied to the biomedical domain, BioC is focused on the goal of interoperability, currently a major barrier to wide-scale adoption of text mining tools. BioC is a simple XML format, specified by DTD, for exchanging data for biomedical natural language processing. With initial implementations in C++ and Java, BioC provides libraries of code for reading and writing BioC text documents and annotations. We extend BioC to Perl, Python, Go and Ruby. We used SWIG to extend the C++ implementation for Perl and one Python implementation. A second Python implementation and the Ruby implementation use native data structures and libraries. BioC is also implemented in the Google language Go. BioC modules are functional in all of these languages, which can facilitate text mining tasks. BioC implementations are freely available through the BioC site: http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net/ PMID:24961236
ChemoPy: freely available python package for computational biology and chemoinformatics.
Cao, Dong-Sheng; Xu, Qing-Song; Hu, Qian-Nan; Liang, Yi-Zeng
2013-04-15
Molecular representation for small molecules has been routinely used in QSAR/SAR, virtual screening, database search, ranking, drug ADME/T prediction and other drug discovery processes. To facilitate extensive studies of drug molecules, we developed a freely available, open-source python package called chemoinformatics in python (ChemoPy) for calculating the commonly used structural and physicochemical features. It computes 16 drug feature groups composed of 19 descriptors that include 1135 descriptor values. In addition, it provides seven types of molecular fingerprint systems for drug molecules, including topological fingerprints, electro-topological state (E-state) fingerprints, MACCS keys, FP4 keys, atom pairs fingerprints, topological torsion fingerprints and Morgan/circular fingerprints. By applying a semi-empirical quantum chemistry program MOPAC, ChemoPy can also compute a large number of 3D molecular descriptors conveniently. The python package, ChemoPy, is freely available via http://code.google.com/p/pychem/downloads/list, and it runs on Linux and MS-Windows. Supplementary data are available at Bioinformatics online.
The Burmese python genome reveals the molecular basis for extreme adaptation in snakes
Castoe, Todd A.; de Koning, A. P. Jason; Hall, Kathryn T.; Card, Daren C.; Schield, Drew R.; Fujita, Matthew K.; Ruggiero, Robert P.; Degner, Jack F.; Daza, Juan M.; Gu, Wanjun; Reyes-Velasco, Jacobo; Shaney, Kyle J.; Castoe, Jill M.; Fox, Samuel E.; Poole, Alex W.; Polanco, Daniel; Dobry, Jason; Vandewege, Michael W.; Li, Qing; Schott, Ryan K.; Kapusta, Aurélie; Minx, Patrick; Feschotte, Cédric; Uetz, Peter; Ray, David A.; Hoffmann, Federico G.; Bogden, Robert; Smith, Eric N.; Chang, Belinda S. W.; Vonk, Freek J.; Casewell, Nicholas R.; Henkel, Christiaan V.; Richardson, Michael K.; Mackessy, Stephen P.; Bronikowski, Anne M.; Yandell, Mark; Warren, Wesley C.; Secor, Stephen M.; Pollock, David D.
2013-01-01
Snakes possess many extreme morphological and physiological adaptations. Identification of the molecular basis of these traits can provide novel understanding for vertebrate biology and medicine. Here, we study snake biology using the genome sequence of the Burmese python (Python molurus bivittatus), a model of extreme physiological and metabolic adaptation. We compare the python and king cobra genomes along with genomic samples from other snakes and perform transcriptome analysis to gain insights into the extreme phenotypes of the python. We discovered rapid and massive transcriptional responses in multiple organ systems that occur on feeding and coordinate major changes in organ size and function. Intriguingly, the homologs of these genes in humans are associated with metabolism, development, and pathology. We also found that many snake metabolic genes have undergone positive selection, which together with the rapid evolution of mitochondrial proteins, provides evidence for extensive adaptive redesign of snake metabolic pathways. Additional evidence for molecular adaptation and gene family expansions and contractions is associated with major physiological and phenotypic adaptations in snakes; genes involved are related to cell cycle, development, lungs, eyes, heart, intestine, and skeletal structure, including GRB2-associated binding protein 1, SSH, WNT16, and bone morphogenetic protein 7. Finally, changes in repetitive DNA content, guanine-cytosine isochore structure, and nucleotide substitution rates indicate major shifts in the structure and evolution of snake genomes compared with other amniotes. Phenotypic and physiological novelty in snakes seems to be driven by system-wide coordination of protein adaptation, gene expression, and changes in the structure of the genome. PMID:24297902
The Burmese python genome reveals the molecular basis for extreme adaptation in snakes.
Castoe, Todd A; de Koning, A P Jason; Hall, Kathryn T; Card, Daren C; Schield, Drew R; Fujita, Matthew K; Ruggiero, Robert P; Degner, Jack F; Daza, Juan M; Gu, Wanjun; Reyes-Velasco, Jacobo; Shaney, Kyle J; Castoe, Jill M; Fox, Samuel E; Poole, Alex W; Polanco, Daniel; Dobry, Jason; Vandewege, Michael W; Li, Qing; Schott, Ryan K; Kapusta, Aurélie; Minx, Patrick; Feschotte, Cédric; Uetz, Peter; Ray, David A; Hoffmann, Federico G; Bogden, Robert; Smith, Eric N; Chang, Belinda S W; Vonk, Freek J; Casewell, Nicholas R; Henkel, Christiaan V; Richardson, Michael K; Mackessy, Stephen P; Bronikowski, Anne M; Yandell, Mark; Warren, Wesley C; Secor, Stephen M; Pollock, David D
2013-12-17
Snakes possess many extreme morphological and physiological adaptations. Identification of the molecular basis of these traits can provide novel understanding for vertebrate biology and medicine. Here, we study snake biology using the genome sequence of the Burmese python (Python molurus bivittatus), a model of extreme physiological and metabolic adaptation. We compare the python and king cobra genomes along with genomic samples from other snakes and perform transcriptome analysis to gain insights into the extreme phenotypes of the python. We discovered rapid and massive transcriptional responses in multiple organ systems that occur on feeding and coordinate major changes in organ size and function. Intriguingly, the homologs of these genes in humans are associated with metabolism, development, and pathology. We also found that many snake metabolic genes have undergone positive selection, which together with the rapid evolution of mitochondrial proteins, provides evidence for extensive adaptive redesign of snake metabolic pathways. Additional evidence for molecular adaptation and gene family expansions and contractions is associated with major physiological and phenotypic adaptations in snakes; genes involved are related to cell cycle, development, lungs, eyes, heart, intestine, and skeletal structure, including GRB2-associated binding protein 1, SSH, WNT16, and bone morphogenetic protein 7. Finally, changes in repetitive DNA content, guanine-cytosine isochore structure, and nucleotide substitution rates indicate major shifts in the structure and evolution of snake genomes compared with other amniotes. Phenotypic and physiological novelty in snakes seems to be driven by system-wide coordination of protein adaptation, gene expression, and changes in the structure of the genome.
Cannon, Robert C; Gleeson, Padraig; Crook, Sharon; Ganapathy, Gautham; Marin, Boris; Piasini, Eugenio; Silver, R Angus
2014-01-01
Computational models are increasingly important for studying complex neurophysiological systems. As scientific tools, it is essential that such models can be reproduced and critically evaluated by a range of scientists. However, published models are currently implemented using a diverse set of modeling approaches, simulation tools, and computer languages making them inaccessible and difficult to reproduce. Models also typically contain concepts that are tightly linked to domain-specific simulators, or depend on knowledge that is described exclusively in text-based documentation. To address these issues we have developed a compact, hierarchical, XML-based language called LEMS (Low Entropy Model Specification), that can define the structure and dynamics of a wide range of biological models in a fully machine readable format. We describe how LEMS underpins the latest version of NeuroML and show that this framework can define models of ion channels, synapses, neurons and networks. Unit handling, often a source of error when reusing models, is built into the core of the language by specifying physical quantities in models in terms of the base dimensions. We show how LEMS, together with the open source Java and Python based libraries we have developed, facilitates the generation of scripts for multiple neuronal simulators and provides a route for simulator free code generation. We establish that LEMS can be used to define models from systems biology and map them to neuroscience-domain specific simulators, enabling models to be shared between these traditionally separate disciplines. LEMS and NeuroML 2 provide a new, comprehensive framework for defining computational models of neuronal and other biological systems in a machine readable format, making them more reproducible and increasing the transparency and accessibility of their underlying structure and properties.
Cannon, Robert C.; Gleeson, Padraig; Crook, Sharon; Ganapathy, Gautham; Marin, Boris; Piasini, Eugenio; Silver, R. Angus
2014-01-01
Computational models are increasingly important for studying complex neurophysiological systems. As scientific tools, it is essential that such models can be reproduced and critically evaluated by a range of scientists. However, published models are currently implemented using a diverse set of modeling approaches, simulation tools, and computer languages making them inaccessible and difficult to reproduce. Models also typically contain concepts that are tightly linked to domain-specific simulators, or depend on knowledge that is described exclusively in text-based documentation. To address these issues we have developed a compact, hierarchical, XML-based language called LEMS (Low Entropy Model Specification), that can define the structure and dynamics of a wide range of biological models in a fully machine readable format. We describe how LEMS underpins the latest version of NeuroML and show that this framework can define models of ion channels, synapses, neurons and networks. Unit handling, often a source of error when reusing models, is built into the core of the language by specifying physical quantities in models in terms of the base dimensions. We show how LEMS, together with the open source Java and Python based libraries we have developed, facilitates the generation of scripts for multiple neuronal simulators and provides a route for simulator free code generation. We establish that LEMS can be used to define models from systems biology and map them to neuroscience-domain specific simulators, enabling models to be shared between these traditionally separate disciplines. LEMS and NeuroML 2 provide a new, comprehensive framework for defining computational models of neuronal and other biological systems in a machine readable format, making them more reproducible and increasing the transparency and accessibility of their underlying structure and properties. PMID:25309419
Haider, Kamran; Cruz, Anthony; Ramsey, Steven; Gilson, Michael K; Kurtzman, Tom
2018-01-09
We have developed SSTMap, a software package for mapping structural and thermodynamic water properties in molecular dynamics trajectories. The package introduces automated analysis and mapping of local measures of frustration and enhancement of water structure. The thermodynamic calculations are based on Inhomogeneous Fluid Solvation Theory (IST), which is implemented using both site-based and grid-based approaches. The package also extends the applicability of solvation analysis calculations to multiple molecular dynamics (MD) simulation programs by using existing cross-platform tools for parsing MD parameter and trajectory files. SSTMap is implemented in Python and contains both command-line tools and a Python module to facilitate flexibility in setting up calculations and for automated generation of large data sets involving analysis of multiple solutes. Output is generated in formats compatible with popular Python data science packages. This tool will be used by the molecular modeling community for computational analysis of water in problems of biophysical interest such as ligand binding and protein function.
photPARTY: Python Automated Square-Aperture Photometry
NASA Astrophysics Data System (ADS)
Symons, Teresa A.
As CCDs have drastically increased the amount of information recorded per frame, so too have they increased the time and effort needed to sift through the data. For observations of a single star, information from millions of pixels needs to be distilled into one number: the magnitude. Various computer systems have been used to streamline this process over the years. The CCDPhot photometer, in use at the Kitt Peak 0.9-m telescope in the 1990s, allowed for user settings and provided real-time magnitudes during observation of single stars. It is this level of speed and convenience that inspired the development of the Python-based software analysis system photPARTY, which can quickly and efficiently produce magnitudes for a set of single-star or uncrowded-field CCD frames. Seeking to remove the need for manual interaction after initial settings for a group of images, photPARTY automatically locates stars, subtracts the background, and performs square-aperture photometry. Rather than being a package of available functions, it is essentially a self-contained, one-click analysis system, with the capability to process several hundred frames in just a couple of minutes. Results of comparisons with present systems such as IRAF are presented.
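The core measurement described above (square-aperture photometry with automatic background subtraction) can be sketched in a few lines of NumPy; this is a self-contained illustration on a synthetic frame, not photPARTY's actual code.

    # Self-contained sketch of square-aperture photometry (not photPARTY's code):
    # estimate the sky from the frame median, sum counts in a square box around
    # the star, and convert to an instrumental magnitude.
    import numpy as np

    rng = np.random.default_rng(0)
    frame = rng.normal(100.0, 5.0, size=(64, 64))   # synthetic sky background
    frame[30:34, 40:44] += 500.0                    # synthetic star

    def square_aperture_mag(frame, x, y, half_width=5, zero_point=25.0):
        sky = np.median(frame)                      # crude background estimate
        box = frame[y - half_width:y + half_width + 1,
                    x - half_width:x + half_width + 1]
        net_counts = box.sum() - sky * box.size     # background-subtracted flux
        return zero_point - 2.5 * np.log10(net_counts)

    print("instrumental magnitude:", round(square_aperture_mag(frame, 41, 31), 3))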
Bionic models for identification of biological systems
NASA Astrophysics Data System (ADS)
Gerget, O. M.
2017-01-01
This article proposes a clinical decision support system that processes biomedical data. For this purpose, a bionic model has been designed based on neural networks, genetic algorithms and immune systems. The developed system has been tested on data from pregnant women. The paper focuses on an approach that enables selection of control actions to minimize the risk of an adverse outcome. The control actions (hyperparameters of a new type) are further used as an additional input signal. Their values are defined by a hyperparameter optimization method. The software, developed in Python, is briefly described.
Cold-induced mortality of invasive Burmese pythons in south Florida
Mazzotti, Frank J.; Cherkiss, Michael S.; Hart, Kristen M.; Snow, Ray W.; Rochford, Michael R.; Dorcas, Michael E.; Reed, Robert N.
2011-01-01
A recent record cold spell in southern Florida (2–11 January 2010) provided an opportunity to evaluate responses of an established population of Burmese pythons (Python molurus bivittatus) to a prolonged period of unusually cold weather. We observed behavior, characterized thermal biology, determined fate of radio-telemetered (n = 10) and non-telemetered (n = 104) Burmese pythons, and analyzed habitat and environmental conditions experienced by pythons during and after a historic cold spell. Telemetered pythons had been implanted with radio-transmitters and temperature-recording data loggers prior to the cold snap. Only one of 10 telemetered pythons survived the cold snap, whereas 59 of 99 (60%) non-telemetered pythons for which we determined fate survived. Body temperatures of eight dead telemetered pythons fluctuated regularly prior to 9 January 2010, then declined substantially during the cold period (9–11 January) and exhibited no further evidence of active thermoregulation indicating they were likely dead. Unusually cold temperatures in January 2010 were clearly associated with mortality of Burmese pythons in the Everglades. Some radio-telemetered pythons appeared to exhibit maladaptive behavior during the cold spell, including attempting to bask instead of retreating to sheltered refugia. We discuss implications of our findings for persistence and spread of introduced Burmese pythons in the United States and for maximizing their rate of removal.
Tsuji, Yamato; Prayitno, Bambang; Suryobroto, Bambang
2016-04-01
We observed an encounter between a reticulated python (Python reticulatus) and a group of wild Javan lutungs (Trachypithecus auratus mauritius) at the Pangandaran Nature Reserve, West Java, Indonesia. A python (about 2 m in length) moved toward a group of lutungs in the trees. Upon seeing the python, an adult male and several adult female lutungs began to emit alarm calls. As the python approached, two adult and one sub-adult female jumped onto a branch near the python and began mobbing the python by shaking the branch. During the mobbing, other individuals in the group (including an adult lutung male) remained nearby but did not participate. The python then rolled into a ball-like shape and stopped moving, at which point the lutungs moved away. The total duration of the encounter was about 40 min, during which time the lutungs stopped feeding and grooming. Group cohesiveness during and after the encounter was greater than that before the encounter, indicating that lutungs adjust their daily activity in response to potential predation risk.
Marsh rabbit mortalities tie pythons to the precipitous decline of mammals in the Everglades.
McCleery, Robert A; Sovie, Adia; Reed, Robert N; Cunningham, Mark W; Hunter, Margaret E; Hart, Kristen M
2015-04-22
To address the ongoing debate over the impact of invasive species on native terrestrial wildlife, we conducted a large-scale experiment to test the hypothesis that invasive Burmese pythons (Python molurus bivittatus) were a cause of the precipitous decline of mammals in Everglades National Park (ENP). Evidence linking pythons to mammal declines has been indirect and there are reasons to question whether pythons, or any predator, could have caused the precipitous declines seen across a range of mammalian functional groups. Experimentally manipulating marsh rabbits, we found that pythons accounted for 77% of rabbit mortalities within 11 months of their translocation to ENP and that python predation appeared to preclude the persistence of rabbit populations in ENP. On control sites, outside of the park, no rabbits were killed by pythons and 71% of attributable marsh rabbit mortalities were classified as mammal predations. Burmese pythons pose a serious threat to the faunal communities and ecological functioning of the Greater Everglades Ecosystem, which will probably spread as python populations expand their range. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Marsh rabbit mortalities tie pythons to the precipitous decline of mammals in the Everglades
McCleery, Robert A.; Sovie, Adia; Reed, Robert N.; Cunningham, Mark W.; Hunter, Margaret E.; Hart, Kristen M.
2015-01-01
To address the ongoing debate over the impact of invasive species on native terrestrial wildlife, we conducted a large-scale experiment to test the hypothesis that invasive Burmese pythons (Python molurus bivittatus) were a cause of the precipitous decline of mammals in Everglades National Park (ENP). Evidence linking pythons to mammal declines has been indirect and there are reasons to question whether pythons, or any predator, could have caused the precipitous declines seen across a range of mammalian functional groups. Experimentally manipulating marsh rabbits, we found that pythons accounted for 77% of rabbit mortalities within 11 months of their translocation to ENP and that python predation appeared to preclude the persistence of rabbit populations in ENP. On control sites, outside of the park, no rabbits were killed by pythons and 71% of attributable marsh rabbit mortalities were classified as mammal predations. Burmese pythons pose a serious threat to the faunal communities and ecological functioning of the Greater Everglades Ecosystem, which will probably spread as python populations expand their range. PMID:25788598
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The powerful package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluating prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor, followed by yielding the predicted results for the submitted query samples. All the aforementioned tedious jobs can be automatically done by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be directly run on Windows, Linux, and Unix.
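The five automated steps listed above map naturally onto a generic scikit-learn workflow; the sketch below is an illustration of that pipeline with a stand-in feature extractor, not Pse-Analysis's own interface.

    # Generic illustration of the five steps (feature extraction, parameter
    # selection, training, cross-validation, evaluation) with scikit-learn;
    # this is not Pse-Analysis's own API.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import SVC

    def extract_features(seq):
        # stand-in feature extraction: nucleotide composition of a DNA sequence
        return [seq.count(b) / len(seq) for b in "ACGT"]

    seqs = ["ACGTACGTAA", "GGGCCCGGTT", "ATATATATCG", "GCGCGCGCAT"] * 10
    labels = np.array([1, 0, 1, 0] * 10)

    X = np.array([extract_features(s) for s in seqs])

    # optimal parameter selection via grid search, then cross-validated evaluation
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)
    search.fit(X, labels)
    print("best C:", search.best_params_["C"])
    print("CV accuracy:", cross_val_score(search.best_estimator_, X, labels, cv=3).mean())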
Optimizing python-based ROOT I/O with PyPy's tracing just-in-time compiler
NASA Astrophysics Data System (ADS)
Lavrijsen, Wim T. L. P.
2012-12-01
The Python programming language allows objects and classes to respond dynamically to the execution environment. Most of this, however, is made possible through language hooks which by definition can not be optimized and thus tend to be slow. The PyPy implementation of Python includes a tracing just in time compiler (JIT), which allows similar dynamic responses but at the interpreter-, rather than the application-level. Therefore, it is possible to fully remove the hooks, leaving only the dynamic response, in the optimization stage for hot loops, if the types of interest are opened up to the JIT. A general opening up of types to the JIT, based on reflection information, has already been developed (cppyy). The work described in this paper takes it one step further by customizing access to ROOT I/O to the JIT, allowing for fully automatic optimizations.
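The reflection-based binding layer mentioned above (cppyy) is also available as a standalone package on CPython and PyPy; the sketch below shows how C++ code becomes callable from Python through it, independently of the ROOT I/O customizations the paper describes.

    # Minimal cppyy sketch: JIT-compile a small C++ helper and call it from
    # Python. This only illustrates the reflection-based binding approach; the
    # ROOT I/O work described in the paper is not reproduced here.
    import cppyy

    cppyy.cppdef("""
    #include <cmath>
    struct Track {
        double px, py;
        double pt() const { return std::sqrt(px*px + py*py); }
    };
    """)

    t = cppyy.gbl.Track()
    t.px, t.py = 3.0, 4.0
    print(t.pt())   # -> 5.0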
Update 0.2 to "pysimm: A python package for simulation of molecular systems"
NASA Astrophysics Data System (ADS)
Demidov, Alexander G.; Fortunato, Michael E.; Colina, Coray M.
2018-01-01
An update to the pysimm Python molecular simulation API is presented. A major part of the update is the implementation of a new interface with CASSANDRA - a modern, versatile Monte Carlo molecular simulation program. Several significant improvements in the LAMMPS communication module that allow better and more versatile simulation setup are reported as well. An example of an application implementing iterative CASSANDRA-LAMMPS interaction is illustrated.
NASA Technical Reports Server (NTRS)
Orcutt, John M.; Barbre, Robert E., Jr.; Brenton, James C.; Decker, Ryan K.
2017-01-01
Launch vehicle programs require vertically complete atmospheric profiles. Many systems at the ER make the necessary measurements, but all have different EVR, vertical coverage, and temporal coverage. The MSFC Natural Environments Branch developed a tool in Python to create a vertically complete profile from multiple inputs. Forward work: finish formal testing, acceptance testing, and end-to-end testing; formal release.
Wyrm: A Brain-Computer Interface Toolbox in Python.
Venthur, Bastian; Dähne, Sven; Höhne, Johannes; Heller, Hendrik; Blankertz, Benjamin
2015-10-01
In recent years, Python has gained more and more traction in the scientific community. Projects like NumPy, SciPy, and Matplotlib have created a strong foundation for scientific computing in Python, and machine learning packages like scikit-learn and data analysis packages like Pandas are building on top of it. In this paper we present Wyrm (https://github.com/bbci/wyrm), an open-source BCI toolbox in Python. Wyrm is applicable to a broad range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application. In order to prevent software defects, Wyrm makes extensive use of unit testing. We will explain the key aspects of Wyrm's software architecture and design decisions for its data structure, and demonstrate and validate the use of our toolbox by presenting our approach to the classification tasks of two different data sets from the BCI Competition III. Furthermore, we will give a brief analysis of the data sets using our toolbox, and demonstrate how we implemented an online experiment using Wyrm. With Wyrm we add the final piece to our ongoing effort to provide a complete, free, and open-source BCI system in Python.
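Wyrm's own data structures are not shown in the abstract; as a generic stand-in, the kind of band-power-plus-linear-classifier analysis typical of BCI pipelines can be sketched with NumPy and scikit-learn on synthetic epochs.

    # Generic BCI-style sketch (not Wyrm's API): log band-power features from
    # synthetic multichannel "EEG" epochs, classified with LDA.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_epochs, n_channels, n_samples = 80, 8, 256
    X_raw = rng.normal(size=(n_epochs, n_channels, n_samples))
    y = np.repeat([0, 1], n_epochs // 2)
    X_raw[y == 1, 0] *= 1.5          # class 1 has more power on channel 0

    # feature: log variance per channel (a simple proxy for band power)
    X = np.log(X_raw.var(axis=2))

    clf = LinearDiscriminantAnalysis()
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())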
NASA Astrophysics Data System (ADS)
O'Kuinghttons, Ryan; Koziol, Benjamin; Oehmke, Robert; DeLuca, Cecelia; Theurich, Gerhard; Li, Peggy; Jacob, Joseph
2016-04-01
The Earth System Modeling Framework (ESMF) Python interface (ESMPy) supports analysis and visualization in Earth system modeling codes by providing access to a variety of tools for data manipulation. ESMPy started as a Python interface to the ESMF grid remapping package, which provides mature and robust high-performance and scalable grid remapping between 2D and 3D logically rectangular and unstructured grids and sets of unconnected data. ESMPy now also interfaces with OpenClimateGIS (OCGIS), a package that performs subsetting, reformatting, and computational operations on climate datasets. ESMPy exposes a subset of ESMF grid remapping utilities. This includes bilinear, finite element patch recovery, first-order conservative, and nearest neighbor grid remapping methods. There are also options to ignore unmapped destination points, mask points on source and destination grids, and provide grid structure in the polar regions. Grid remapping on the sphere takes place in 3D Cartesian space, so the pole problem is not an issue as it can be with other grid remapping software. Remapping can be done between any combination of 2D and 3D logically rectangular and unstructured grids with overlapping domains. Grid pairs where one side of the regridding is represented by an appropriate set of unconnected data points, as is commonly found with observational data streams, is also supported. There is a developing interoperability layer between ESMPy and OpenClimateGIS (OCGIS). OCGIS is a pure Python, open source package designed for geospatial manipulation, subsetting, and computation on climate datasets stored in local NetCDF files or accessible remotely via the OPeNDAP protocol. Interfacing with OCGIS has brought GIS-like functionality to ESMPy (i.e. subsetting, coordinate transformations) as well as additional file output formats (i.e. CSV, ESRI Shapefile). ESMPy is distinguished by its strong emphasis on open source, community governance, and distributed development. The user base has grown quickly, and the package is integrating with several other software tools and frameworks. These include the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), Iris, PyFerret, cfpython, and the Community Surface Dynamics Modeling System (CSDMS). ESMPy minimum requirements include Python 2.6, Numpy 1.6.1 and an ESMF installation. Optional dependencies include NetCDF and OCGIS-related dependencies: GDAL, Shapely, and Fiona. ESMPy is regression tested nightly, and supported on Darwin, Linux and Cray systems with the GNU compiler suite and MPI communications. OCGIS is supported on Linux, and also undergoes nightly regression testing. Both packages are installable from Anaconda channels. Upcoming development plans for ESMPy involve development of a higher order conservative grid remapping method. Future OCGIS development will focus on mesh and location stream interoperability and streamlined access to ESMPy's MPI implementation.
NASA Astrophysics Data System (ADS)
Machalek, P.; Kim, S. M.; Berry, R. D.; Liang, A.; Small, T.; Brevdo, E.; Kuznetsova, A.
2012-12-01
We describe how the Climate Corporation uses Python and Clojure, a language implemented on top of Java, to generate climatological forecasts for precipitation based on the Advanced Hydrologic Prediction Service (AHPS) radar-based daily precipitation measurements. A 2-year-long forecast is generated on each of the ~650,000 CONUS land-based 4-km AHPS grids by constructing 10,000 ensembles sampled from a 30-year reconstructed AHPS history for each grid. The spatial and temporal correlations between neighboring AHPS grids and the sampling of the analogues are handled by Python. The parallelization for all the 650,000 CONUS stations is further achieved by utilizing the MapReduce framework (http://code.google.com/edu/parallel/mapreduce-tutorial.html). Each full-scale computational run requires hundreds of nodes with up to 8 processors each on the Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce/) distributed computing service, resulting in 3-terabyte datasets. We further describe how we have productionized a monthly run of the simulation process at the full scale of the 4-km AHPS grids and how the resultant terabyte-sized datasets are handled.
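The core sampling idea described above (building precipitation ensembles by drawing analogue years from a reconstructed history for each grid cell) can be sketched in a few lines of NumPy; the array shapes, ensemble size, and synthetic record below are illustrative, not the production configuration.

    # Illustrative sketch of analogue-ensemble sampling for one grid cell:
    # draw whole years from a 30-year daily-precipitation history to build
    # an ensemble of possible future years (shapes and sizes are made up).
    import numpy as np

    rng = np.random.default_rng(42)
    n_years, n_days = 30, 365
    history = rng.gamma(shape=0.3, scale=5.0, size=(n_years, n_days))  # fake record

    n_ensembles = 1000
    sampled_years = rng.integers(0, n_years, size=n_ensembles)
    ensemble = history[sampled_years]            # (n_ensembles, n_days)

    annual_totals = ensemble.sum(axis=1)
    print("median annual precip :", round(np.median(annual_totals), 1))
    print("5th-95th percentile  :", np.percentile(annual_totals, [5, 95]).round(1))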
Hoon-Hanks, Laura L; Layton, Marylee L; Ossiboff, Robert J; Parker, John S L; Dubovi, Edward J; Stenglein, Mark D
2018-04-01
Circumstantial evidence has linked a new group of nidoviruses with respiratory disease in pythons, lizards, and cattle. We conducted experimental infections in ball pythons (Python regius) to test the hypothesis that ball python nidovirus (BPNV) infection results in respiratory disease. Three ball pythons were inoculated orally and intratracheally with cell culture isolated BPNV and two were sham inoculated. Antemortem choanal, oroesophageal, and cloacal swabs and postmortem tissues of infected snakes were positive for viral RNA, protein, and infectious virus by qRT-PCR, immunohistochemistry, western blot and virus isolation. Clinical signs included oral mucosal reddening, abundant mucus secretions, open-mouthed breathing, and anorexia. Histologic lesions included chronic-active mucinous rhinitis, stomatitis, tracheitis, esophagitis and proliferative interstitial pneumonia. Control snakes remained negative and free of clinical signs throughout the experiment. Our findings establish a causal relationship between nidovirus infection and respiratory disease in ball pythons and shed light on disease progression and transmission. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Ultrasound imaging of the anterior section of the eye of five different snake species.
Lauridsen, Henrik; Da Silva, Mari-Ann O; Hansen, Kasper; Jensen, Heidi M; Warming, Mads; Wang, Tobias; Pedersen, Michael
2014-12-30
Nineteen clinically normal snakes: six ball pythons (Python regius), six Burmese pythons (Python bivittatus), one Children's python (Antaresia childreni), four Amazon tree boas (Corallus hortulanus), and two Malagasy ground boas (Acrantophis madagascariensis) were subjected to ultrasound imaging with 21 MHz (ball python) and 50 MHz (ball python, Burmese python, Children's python, Amazon tree boa, Malagasy ground boa) transducers in order to measure the different structures of the anterior segment in clinically normal snake eyes with the aim to review baseline values for clinically important ophthalmic structures. The ultrasonographic measurements included horizontal spectacle diameter, spectacle thickness, depth of sub-spectacular space and corneal thickness. For comparative purposes, a formalin-fixed head of a Burmese python was subjected to micro computed tomography. In all snakes, the spectacle was thinner than the cornea. There was significant difference in spectacle diameter, and spectacle and corneal thickness between the Amazon tree boa and the Burmese and ball pythons. There was no difference in the depth of the sub-spectacular space. The results obtained in the Burmese python with the 50 MHz transducer were similar to the results obtained with micro computed tomography. Images acquired with the 21 MHz transducer included artifacts which may be misinterpreted as ocular structures. Our measurements of the structures in the anterior segment of the eye can serve as orientative values for snakes examined for ocular diseases. In addition, we demonstrated that using a high frequency transducer minimizes the risk of misinterpreting artifacts as ocular structures.
Wong, Wing Chung; Kim, Dewey; Carter, Hannah; Diekhans, Mark; Ryan, Michael C; Karchin, Rachel
2011-08-01
Thousands of cancer exomes are currently being sequenced, yielding millions of non-synonymous single nucleotide variants (SNVs) of possible relevance to disease etiology. Here, we provide a software toolkit to prioritize SNVs based on their predicted contribution to tumorigenesis. It includes a database of precomputed, predictive features covering all positions in the annotated human exome and can be used either stand-alone or as part of a larger variant discovery pipeline. MySQL database, source code and binaries are freely available for academic/government use at http://wiki.chasmsoftware.org. Source is in Python and C++. Requires a 32- or 64-bit Linux system (tested on Fedora Core 8, 10, 11 and Ubuntu 10), 2.5 ≤ Python < 3.0, MySQL server > 5.0, 60 GB of available hard disk space (50 MB for software and data files, 40 GB for the MySQL database dump when uncompressed), and 2 GB of RAM.
EasyModeller: A graphical interface to MODELLER
2010-01-01
Background MODELLER is a program for automated protein homology modeling. It is one of the most widely used tools for homology or comparative modeling of protein three-dimensional structures, but most users find it difficult to start with MODELLER as it is command-line based and requires knowledge of basic Python scripting to use efficiently. Findings The study was designed with the aim of developing the "EasyModeller" tool as a frontend graphical interface to MODELLER using Perl/Tk, which can be used as a standalone tool on the Windows platform with MODELLER and Python preinstalled. It helps inexperienced users perform modeling, assessment, visualization, and optimization of protein models in a simple and straightforward way. Conclusion EasyModeller provides a straightforward graphical interface and functions as a stand-alone tool that can be used on a standard personal computer with Microsoft Windows as the operating system. PMID:20712861
astroplan: An Open Source Observation Planning Package in Python
NASA Astrophysics Data System (ADS)
Morris, Brett M.; Tollerud, Erik; Sipőcz, Brigitta; Deil, Christoph; Douglas, Stephanie T.; Berlanga Medina, Jazmin; Vyhmeister, Karl; Smith, Toby R.; Littlefair, Stuart; Price-Whelan, Adrian M.; Gee, Wilfred T.; Jeschke, Eric
2018-03-01
We present astroplan—an open source, open development, Astropy affiliated package for ground-based observation planning and scheduling in Python. astroplan is designed to provide efficient access to common observational quantities such as celestial rise, set, and meridian transit times and simple transformations from sky coordinates to altitude-azimuth coordinates without requiring a detailed understanding of astropy’s implementation of coordinate systems. astroplan provides convenience functions to generate common observational plots such as airmass and parallactic angle as a function of time, along with basic sky (finder) charts. Users can determine whether or not a target is observable given a variety of observing constraints, such as airmass limits, time ranges, Moon illumination/separation ranges, and more. A selection of observation schedulers are included that divide observing time among a list of targets, given observing constraints on those targets. Contributions to the source code from the community are welcome.
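A short usage sketch along the lines described above (checking whether a target is observable under airmass and night-time constraints) follows; the calls track astroplan's documented interface, but the exact names, the site, and the target used here are examples and should be checked against the current documentation (FixedTarget.from_name also resolves the name online).

    # Sketch of astroplan usage: is Vega observable from Kitt Peak on a given
    # night under an airmass limit and a civil-twilight night constraint?
    from astropy.time import Time
    from astroplan import (Observer, FixedTarget, AirmassConstraint,
                           AtNightConstraint, is_observable)

    observer = Observer.at_site("Kitt Peak")
    target = FixedTarget.from_name("Vega")        # requires network access

    constraints = [AirmassConstraint(max=2.0), AtNightConstraint.twilight_civil()]
    time_range = Time(["2024-06-01 00:00", "2024-06-02 00:00"])

    print(is_observable(constraints, observer, [target], time_range=time_range))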
Morphological respiratory diffusion capacity of the lungs of ball pythons (Python regius).
Starck, J Matthias; Aupperle, Heike; Kiefer, Ingmar; Weimer, Isabel; Krautwald-Junghanns, Maria-Elisabeth; Pees, Michael
2012-08-01
This study aims at a functional and morphological characterization of the lung of a boid snake. In particular, we were interested to see if the python's lungs are designed with excess capacity as compared to resting and working oxygen demands. Therefore, the morphological respiratory diffusion capacity of ball pythons (Python regius) was examined following a stereological, hierarchically nested approach. The volume of the respiratory exchange tissue was determined using computed tomography. Tissue compartments were quantified using stereological methods on light microscopic images. The tissue diffusion barrier for oxygen transport was characterized and measured using transmission electron micrographs. We found a significant negative correlation between body mass and the volume of respiratory tissue; the lungs of larger snakes had relatively less respiratory tissue. Therefore, mass-specific respiratory tissue was calculated to exclude effects of body mass. The volume of the lung that contains parenchyma was 11.9 ± 5.0 mm³ g⁻¹. The volume fraction, i.e., the actual pulmonary exchange tissue per lung parenchyma, was 63.22 ± 7.3%; the total respiratory surface was, on average, 0.214 ± 0.129 m²; it was significantly negatively correlated to body mass, with larger snakes having proportionally smaller respiratory surfaces. For the air-blood barrier, a harmonic mean of 0.78 ± 0.05 μm was found, with the epithelial layer representing the thickest part of the barrier. Based on these findings, a median diffusion capacity of the tissue barrier of 0.69 ± 0.38 ml O₂ min⁻¹ mmHg⁻¹ was calculated. Based on published values for blood oxygen concentration, a total oxygen uptake capacity of 61.16 ml O₂ min⁻¹ kg⁻¹ can be assumed. This value exceeds the maximum demand for oxygen in ball pythons by a factor of 12. We conclude that healthy individuals of P. regius possess a considerable spare capacity for tissue oxygen exchange. Copyright © 2012 Elsevier GmbH. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-05-13
STONIX is a program for configuring UNIX and Linux computer operating systems. It applies configurations based on guidance from publicly accessible resources such as NSA guides, DISA STIGs, the Center for Internet Security (CIS), USGCB, and vendor security documentation. STONIX is written in the Python programming language and uses the Qt4 and PyQt4 libraries to provide a GUI. The code is designed to be easily extensible and customizable.
"Remember to Hand out Medals": Peer Rating and Expertise in a Question-and-Answer Study Group
ERIC Educational Resources Information Center
Ponti, Marisa
2015-01-01
This article reports on an exploratory study of giving medals as part of a peer rating system in a question-and-answer (Q&A) study group on Python, a programming language. There are no professional teachers tutoring learners. The study aimed to understand whether and how medals, awarded to responses in a peer-based learning environment, can…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piot, P.; Halavanau, A.
This paper discusses the implementation of a Python-based high-level interface to the Fermilab ACNET control system. The interface has been successfully employed during the commissioning of the Fermilab Accelerator Science & Technology (FAST) facility. Specifically, we present examples of applications at FAST which include the interfacing of the elegant program to assist lattice matching, an automated emittance measurement via the quadrupole-scan method, and transverse transport matrix measurement of a superconducting RF cavity.
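The automated emittance measurement mentioned above relies on the standard quadrupole-scan analysis: for a thin-lens quad of integrated strength K followed by a drift of length d, the measured beam size squared at the screen is quadratic in K, and the fitted parabola coefficients give back the beam sigma matrix. The sketch below works that analysis through on synthetic data; it is not the FAST interface code, and the drift length and "true" sigma matrix are made-up values.

    # Standard quad-scan analysis sketch (not the ACNET/FAST interface itself):
    # fit sigma_x^2(K) = a*K^2 + b*K + c for a thin-lens quad of integrated
    # strength K followed by a drift d, then recover the sigma matrix and emittance.
    import numpy as np

    d = 2.0                                   # quad-to-screen drift [m] (assumed)
    sig11, sig12, sig22 = 4e-7, -2e-7, 3e-7   # "true" sigma matrix used to fake data

    K = np.linspace(-2.0, 2.0, 15)            # scanned integrated strengths [1/m]
    M11, M12 = 1.0 - d * K, d                 # thin-lens quad + drift transfer elements
    sigma_x2 = M11**2 * sig11 + 2 * M11 * M12 * sig12 + M12**2 * sig22
    sigma_x2 += np.random.default_rng(0).normal(0, 1e-9, K.size)   # measurement noise

    a, b, c = np.polyfit(K, sigma_x2, 2)      # parabola fit in K

    s11 = a / d**2
    s12 = -(b + 2 * d * s11) / (2 * d**2)
    s22 = (c - s11 - 2 * d * s12) / d**2
    print("fitted geometric emittance [m rad]:", np.sqrt(s11 * s22 - s12**2))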
NASA Astrophysics Data System (ADS)
Erickson, T.
2016-12-01
Deriving actionable information from Earth observation data obtained from sensors or models can be quite complicated, and sharing those insights with others in a form that they can understand, reproduce, and improve upon is equally difficult. Journal articles, even if digital, commonly present just a summary of an analysis that cannot be understood in depth or reproduced without major effort on the part of the reader. Here we show a method of improving scientific literacy by pairing a recently developed scientific presentation technology (Jupyter Notebooks) with a petabyte-scale platform for accessing and analyzing Earth observation and model data (Google Earth Engine). Jupyter Notebooks are interactive web documents that mix live code with annotations such as rich-text markup, equations, images, videos, hyperlinks and dynamic output. Notebooks were first introduced as part of the IPython project in 2011, and have since gained wide acceptance in the scientific programming community, initially among Python programmers but later by a wide range of scientific programming languages. While Jupyter Notebooks have been widely adopted for general data analysis, data visualization, and machine learning, to date there have been relatively few examples of using Jupyter Notebooks to analyze geospatial datasets. Google Earth Engine is a cloud-based platform for analyzing geospatial data, such as satellite remote sensing imagery and/or Earth system model output. Through its Python API, Earth Engine makes petabytes of Earth observation data accessible, and provides hundreds of algorithmic building blocks that can be chained together to produce high-level algorithms and outputs in real-time. We anticipate that this technology pairing will facilitate a better way of creating, documenting, and sharing complex analyses that derive information on our Earth that can be used to promote broader understanding of the complex issues that it faces. http://jupyter.org and https://earthengine.google.com
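Pairing the Earth Engine Python API with a notebook typically looks like the sketch below; the dataset ID, band name, and point of interest are placeholders, and ee.Initialize() assumes authentication has already been completed.

    # Sketch of the Earth Engine Python API in a notebook cell; assumes
    # `earthengine authenticate` has already been run. Dataset, band, and the
    # point of interest are placeholder choices.
    import ee

    ee.Initialize()

    point = ee.Geometry.Point(-81.5, 25.5)                 # somewhere in south Florida
    collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
                  .filterBounds(point)
                  .filterDate("2021-01-01", "2021-12-31"))

    print("scenes found:", collection.size().getInfo())

    # reduce to a median composite and sample one band value at the point
    median = collection.median()
    sample = median.select("SR_B4").reduceRegion(ee.Reducer.mean(), point, 30)
    print(sample.getInfo())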
An object oriented Python interface for atomistic simulations
NASA Astrophysics Data System (ADS)
Hynninen, T.; Himanen, L.; Parkkinen, V.; Musso, T.; Corander, J.; Foster, A. S.
2016-01-01
Programmable simulation environments allow one to monitor and control calculations efficiently and automatically before, during, and after runtime. Environments directly accessible in a programming environment can be interfaced with powerful external analysis tools and extensions to enhance the functionality of the core program, and by incorporating a flexible object based structure, the environments make building and analysing computational setups intuitive. In this work, we present a classical atomistic force field with an interface written in Python language. The program is an extension for an existing object based atomistic simulation environment.
Unilateral microphthalmia or anophthalmia in eight pythons (Pythonidae).
Da Silva, Mari-Ann O; Bertelsen, Mads F; Wang, Tobias; Pedersen, Michael; Lauridsen, Henrik; Heegaard, Steffen
2015-01-01
To provide morphological descriptions of microphthalmia or anophthalmia in eight pythons using microcomputerized tomography (μCT), magnetic resonance imaging (MRI), and histopathology. Seven Burmese pythons (Python bivittatus) and one ball python (P. regius) with clinically normal right eyes and an abnormal or missing left eye. At the time of euthanasia, four of the eight snakes underwent necropsy. Hereafter, the heads of two Burmese pythons and one ball python were examined using μCT, and another Burmese python was subjected to MRI. Following these procedures, the heads of these four pythons along with the heads of an additional three Burmese pythons were prepared for histology. All eight snakes had left ocular openings seen as dermal invaginations between 0.2 and 2.0 mm in diameter. They also had varying degrees of malformations of the orbital bones and a limited presence of nervous, glandular, and muscle tissue in the posterior orbit. Two individuals had small but identifiable eyes. Furthermore, remnants of the pigmented embryonic framework of the hyaloid vessels were found in the anophthalmic snakes. Necropsies revealed no other macroscopic anomalies. Eight pythons with unilateral left-sided microphthalmia or anophthalmia had one normal eye and a left orbit with malformed or incompletely developed ocular structures along with remnants of fetal structures. These cases lend further information to a condition that is often seen in snakes, but infrequently described. © 2014 American College of Veterinary Ophthalmologists.
The Transition from VMS to Unix Operations for STScI's Science Planning and Scheduling Team
NASA Astrophysics Data System (ADS)
Adler, D. S.; Taylor, D. K.
The Science Planning and Scheduling Team of the Space Telescope Science Institute currently uses the VMS operating system. SPST began a transition to Unix-based operations in the summer of 1999. The main tasks for SPST to address in the Unix transition are: (1) converting the current SPST operational tools from DCL to Python; (2) converting our database report scripts from SQL; (3) adopting a Unix-based code management system; and (4) training the SPST staff. The goal is to fully transition the team to Unix operations by the end of 2001.
Turning a remotely controllable observatory into a fully autonomous system
NASA Astrophysics Data System (ADS)
Swindell, Scott; Johnson, Chris; Gabor, Paul; Zareba, Grzegorz; Kubánek, Petr; Prouza, Michael
2014-08-01
We describe a complex process needed to turn an existing, old, operational observatory - the Steward Observatory's 61" Kuiper Telescope - into a fully autonomous system, which observes without an observer. For this purpose, we employed RTS2, an open-source, Linux-based observatory control system, together with other open-source programs and tools (GNU compilers, the Python language for scripting, jQuery UI for the web user interface). This presentation provides a guide, with time estimates, for newcomers to the field handling such challenging tasks as fully autonomous observatory operations.
PyParse: a semiautomated system for scoring spoken recall data.
Solway, Alec; Geller, Aaron S; Sederberg, Per B; Kahana, Michael J
2010-02-01
Studies of human memory often generate data on the sequence and timing of recalled items, but scoring such data using conventional methods is difficult or impossible. We describe a Python-based semiautomated system that greatly simplifies this task. This software, called PyParse, can easily be used in conjunction with many common experiment authoring systems. Scored data is output in a simple ASCII format and can be accessed with the programming language of choice, allowing for the identification of features such as correct responses, prior-list intrusions, extra-list intrusions, and repetitions.
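The scoring categories the abstract implies (correct responses, prior-list intrusions, extra-list intrusions, repetitions) are straightforward to compute once recalls are transcribed; the following self-contained sketch shows that bookkeeping and is not PyParse's own output format or API.

    # Self-contained sketch of scoring a free-recall transcript (not PyParse itself):
    # classify each recalled word as correct, prior-list intrusion, extra-list
    # intrusion, or repetition.
    def score_recall(recalled, current_list, prior_lists):
        prior = {w for lst in prior_lists for w in lst}
        seen, scored = set(), []
        for word in recalled:
            if word in seen:
                label = "repetition"
            elif word in current_list:
                label = "correct"
            elif word in prior:
                label = "prior-list intrusion"
            else:
                label = "extra-list intrusion"
            seen.add(word)
            scored.append((word, label))
        return scored

    current = ["dog", "tree", "lamp", "river"]
    prior = [["stone", "cloud"], ["piano"]]
    for word, label in score_recall(["dog", "piano", "dog", "sun"], current, prior):
        print(f"{word:>6}: {label}")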
ssbio: a Python framework for structural systems biology.
Mih, Nathan; Brunk, Elizabeth; Chen, Ke; Catoiu, Edward; Sastry, Anand; Kavvas, Erol; Monk, Jonathan M; Zhang, Zhen; Palsson, Bernhard O
2018-06-15
Working with protein structures at the genome-scale has been challenging in a variety of ways. Here, we present ssbio, a Python package that provides a framework to easily work with structural information in the context of genome-scale network reconstructions, which can contain thousands of individual proteins. The ssbio package provides an automated pipeline to construct high quality genome-scale models with protein structures (GEM-PROs), wrappers to popular third-party programs to compute associated protein properties, and methods to visualize and annotate structures directly in Jupyter notebooks, thus lowering the barrier of linking 3D structural data with established systems workflows. ssbio is implemented in Python and available to download under the MIT license at http://github.com/SBRG/ssbio. Documentation and Jupyter notebook tutorials are available at http://ssbio.readthedocs.io/en/latest/. Interactive notebooks can be launched using Binder at https://mybinder.org/v2/gh/SBRG/ssbio/master?filepath=Binder.ipynb. Supplementary data are available at Bioinformatics online.
Technical integration of hippocampus, Basal Ganglia and physical models for spatial navigation.
Fox, Charles; Humphries, Mark; Mitchinson, Ben; Kiss, Tamas; Somogyvari, Zoltan; Prescott, Tony
2009-01-01
Computational neuroscience is increasingly moving beyond modeling individual neurons or neural systems to consider the integration of multiple models, often constructed by different research groups. We report on our preliminary technical integration of recent hippocampal formation, basal ganglia and physical environment models, together with visualisation tools, as a case study in the use of Python across the modelling tool-chain. We do not present new modeling results here. The architecture incorporates leaky-integrator and rate-coded neurons, a 3D environment with collision detection and tactile sensors, 3D graphics and 2D plots. We found Python to be a flexible platform, offering a significant reduction in development time, without a corresponding significant increase in execution time. We illustrate this by implementing a part of the model in various alternative languages and coding styles, and comparing their execution times. For very large-scale system integration, communication with other languages and parallel execution may be required, which we demonstrate using the BRAHMS framework's Python bindings.
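The architecture described mixes leaky-integrator and rate-coded units; as a small stand-alone illustration of the first of these (not the integrated hippocampus/basal ganglia model itself), a leaky integrator can be stepped with forward Euler updates in a few lines of NumPy.

    # Minimal leaky-integrator unit (illustrative only, not the integrated model):
    # tau * da/dt = -a + input, stepped with forward Euler; output is a rectified rate.
    import numpy as np

    tau, dt = 20e-3, 1e-3          # time constant and step (seconds)
    steps = 300
    a = 0.0
    drive = np.zeros(steps)
    drive[50:200] = 1.0            # square input pulse

    activation = np.empty(steps)
    for t in range(steps):
        a += dt / tau * (-a + drive[t])
        activation[t] = max(a, 0.0)   # rate-coded output

    print("peak activation:", round(activation.max(), 3))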
ObsPy: A Python Toolbox for Seismology - Recent Developments and Applications
NASA Astrophysics Data System (ADS)
Megies, T.; Krischer, L.; Barsch, R.; Sales de Andrade, E.; Beyreuther, M.
2014-12-01
ObsPy (http://www.obspy.org) is a community-driven, open-source project dedicated to building a bridge for seismology into the scientific Python ecosystem. It offers a) read and write support for essentially all commonly used waveform, station, and event metadata file formats with a unified interface, b) a comprehensive signal processing toolbox tuned to the needs of seismologists, c) integrated access to all large data centers, web services and databases, and d) convenient wrappers to legacy codes like libtau and evalresp. Python, currently the most popular language for teaching introductory computer science courses at top-ranked U.S. departments, is a full-blown programming language with the flexibility of an interactive scripting language. Its extensive standard library and large variety of freely available high quality scientific modules cover most needs in developing scientific processing workflows. Together with packages like NumPy, SciPy, Matplotlib, IPython, Pandas, lxml, and PyQt, ObsPy enables the construction of complete workflows in Python. These vary from reading locally stored data or requesting data from one or more different data centers through to signal analysis and data processing and on to visualizations in GUI and web applications, output of modified/derived data and the creation of publication-quality figures.ObsPy enjoys a large world-wide rate of adoption in the community. Applications successfully using it include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions to name a few examples. All functionality is extensively documented and the ObsPy tutorial and gallery give a good impression of the wide range of possible use cases.We will present the basic features of ObsPy, new developments and applications, and a roadmap for the near future and discuss the sustainability of our open-source development model.
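A typical end-to-end snippet of the workflow described above (fetch waveforms from a data centre, process, plot) is sketched below; the station and time window chosen are arbitrary examples and the request needs network access.

    # ObsPy sketch: request an hour of broadband data from the IRIS FDSN web
    # service, do basic processing, and plot. Station and time window are
    # arbitrary examples.
    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    client = Client("IRIS")
    t0 = UTCDateTime("2011-03-11T05:46:00")           # approximate Tohoku origin time
    st = client.get_waveforms("IU", "ANMO", "00", "BHZ", t0, t0 + 3600)

    st.detrend("demean")
    st.filter("bandpass", freqmin=0.01, freqmax=0.1)
    print(st)
    st.plot()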
Landlab: an Open-Source Python Library for Modeling Earth Surface Dynamics
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Adams, J. M.; Hobley, D. E. J.; Hutton, E.; Nudurupati, S. S.; Istanbulluoglu, E.; Tucker, G. E.
2016-12-01
Landlab is an open-source Python modeling library that enables users to easily build unique models to explore earth surface dynamics. The Landlab library provides a number of tools and functionalities that are common to many earth surface models, thus eliminating the need for a user to recode fundamental model elements each time she explores a new problem. For example, Landlab provides a gridding engine so that a user can build a uniform or nonuniform grid in one line of code. The library has tools for setting boundary conditions, adding data to a grid, and performing basic operations on the data, such as calculating gradients and curvature. The library also includes a number of process components, which are numerical implementations of physical processes. To create a model, a user creates a grid and couples together process components that act on grid variables. The current library has components for modeling a diverse range of processes, from overland flow generation to bedrock river incision, from soil wetting and drying to vegetation growth, succession and death. The code is freely available for download (https://github.com/landlab/landlab) or can be installed as a Python package. Landlab models can also be built and run on Hydroshare (www.hydroshare.org), an online collaborative environment for sharing hydrologic data, models, and code. Tutorials illustrating a wide range of Landlab capabilities such as building a grid, setting boundary conditions, reading in data, plotting, using components and building models are also available (https://github.com/landlab/tutorials). The code is also comprehensively documented both online and natively in Python. In this presentation, we illustrate the diverse capabilities of Landlab. We highlight existing functionality by illustrating outcomes from a range of models built with Landlab - including applications that explore landscape evolution and ecohydrology. Finally, we describe the range of resources available for new users.
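A short sketch of the grid-plus-component pattern described above (build a raster grid, attach an elevation field, step a process component through time) follows; the component and keyword names track the Landlab documentation but can differ between versions, so treat this as indicative rather than definitive.

    # Indicative Landlab sketch: raster grid + elevation field + a linear
    # diffusion component stepped forward in time. Keyword names follow the
    # documentation but may vary between Landlab versions.
    import numpy as np
    from landlab import RasterModelGrid
    from landlab.components import LinearDiffuser

    grid = RasterModelGrid((25, 40), xy_spacing=100.0)           # 25 x 40 nodes, 100 m spacing
    z = grid.add_zeros("topographic__elevation", at="node")
    z += np.random.default_rng(0).random(grid.number_of_nodes)   # small initial roughness

    diffuser = LinearDiffuser(grid, linear_diffusivity=0.01)     # m^2/yr
    for _ in range(200):
        diffuser.run_one_step(1000.0)                            # dt in years

    print("relief after 200 kyr:", float(z.max() - z.min()))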
Pythons in Burma: Short-tailed python (Reptilia: Squamata)
Zug, George R.; Gotte, Steve W.; Jacobs, Jeremy F.
2011-01-01
Short-tailed pythons, Python curtus species group, occur predominantly in the Malayan Peninsula, Sumatra, and Borneo. The discovery of an adult female in Mon State, Myanmar, led to a review of the distribution of all group members (spot-mapping of all localities of confirmed occurrence) and an examination of morphological variation in P. brongersmai. The resulting maps demonstrate a limited occurrence of these pythons within peninsular Malaya, Sumatra, and Borneo with broad absences in these regions. Our small samples limit the recognition of regional differentiation in the morphology of P. brongersmai populations; however, the presence of unique traits in the Myanmar python and its strong allopatry indicate that it is a unique genetic lineage, and it is described as Python kyaiktiyo new species.
Image Classification Workflow Using Machine Learning Methods
NASA Astrophysics Data System (ADS)
Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.
2016-12-01
Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free to use solutions that are currently available come bundled up as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy to use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python to develop our software because it is relatively readable, has a large body of relevant third party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. In order to test our classification software, we performed a K-means unsupervised classification, Gaussian Maximum Likelihood supervised classification, and a Mahalanobis Distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas with a spatial resolution of 60 meters for the years of 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software makes the ease of land use classification using commercial software available without an expensive license.
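The unsupervised K-means path described above reduces to reshaping the multispectral raster to a (pixels × bands) matrix and clustering it; the self-contained sketch below uses a synthetic array, whereas a real workflow would load the Landsat bands with GDAL instead.

    # Sketch of unsupervised land-cover classification: reshape a (bands, rows, cols)
    # multispectral array to (pixels, bands) and K-means cluster it. A real workflow
    # would read Landsat bands with GDAL rather than use the synthetic array here.
    import numpy as np
    from sklearn.cluster import KMeans

    bands, rows, cols = 6, 120, 120
    image = np.random.default_rng(0).random((bands, rows, cols))   # stand-in raster

    pixels = image.reshape(bands, -1).T          # (rows*cols, bands)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
    class_map = labels.reshape(rows, cols)       # per-pixel class raster

    print("pixels per class:", np.bincount(labels))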
DOE Office of Scientific and Technical Information (OSTI.GOV)
SmartImport.py is a Python source-code file that implements a replacement for the standard Python module importer. The code is derived from knee.py, a file in the standard Python distribution, and adds functionality to improve the performance of Python module imports in massively parallel contexts.
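The abstract does not reproduce SmartImport's mechanism; in current Python, replacing or wrapping the import machinery is done through importlib meta-path hooks, and the minimal illustrative hook below (which simply times module resolution) shows the general pattern rather than SmartImport's actual logic.

    # Illustrative import hook (not SmartImport's actual logic): time module
    # resolution by installing a meta-path finder via importlib.
    import importlib.abc
    import sys
    import time

    class TimingFinder(importlib.abc.MetaPathFinder):
        def find_spec(self, fullname, path, target=None):
            start = time.perf_counter()
            # delegate to the remaining finders so normal resolution still happens
            for finder in sys.meta_path:
                if finder is self:
                    continue
                spec = finder.find_spec(fullname, path, target)
                if spec is not None:
                    print(f"resolved {fullname} in {time.perf_counter() - start:.6f}s")
                    return spec
            return None

    sys.meta_path.insert(0, TimingFinder())
    # timing prints only for modules not already cached in sys.modules
    import decimal  # noqa: E402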
BioC implementations in Go, Perl, Python and Ruby.
Liu, Wanli; Islamaj Doğan, Rezarta; Kwon, Dongseop; Marques, Hernani; Rinaldi, Fabio; Wilbur, W John; Comeau, Donald C
2014-01-01
As part of a communitywide effort for evaluating text mining and information extraction systems applied to the biomedical domain, BioC is focused on the goal of interoperability, currently a major barrier to wide-scale adoption of text mining tools. BioC is a simple XML format, specified by DTD, for exchanging data for biomedical natural language processing. With initial implementations in C++ and Java, BioC provides libraries of code for reading and writing BioC text documents and annotations. We extend BioC to Perl, Python, Go and Ruby. We used SWIG to extend the C++ implementation for Perl and one Python implementation. A second Python implementation and the Ruby implementation use native data structures and libraries. BioC is also implemented in the Google language Go. BioC modules are functional in all of these languages, which can facilitate text mining tasks. BioC implementations are freely available through the BioC site: http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net/ Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
NASA Astrophysics Data System (ADS)
Merticariu, Vlad; Misev, Dimitar; Baumann, Peter
2017-04-01
While Python has developed into the lingua franca of Data Science, there is often a paradigm break when accessing specialized tools. In particular, for one of the core data categories in science and engineering, massive multi-dimensional arrays, out-of-memory solutions typically employ their own, different models. We discuss this situation using the example of the scalable open-source array engine rasdaman ("raster data manager"), which offers access to and processing of petascale multi-dimensional arrays through an SQL-style array query language, rasql. Such queries are executed in the server on a storage engine utilizing adaptive array partitioning and based on a processing engine implementing a "tile streaming" paradigm to allow processing of arrays massively larger than server RAM. The rasdaman QL has acted as a blueprint for the forthcoming ISO Array SQL and the Open Geospatial Consortium (OGC) geo analytics language, Web Coverage Processing Service, adopted in 2008. Not surprisingly, rasdaman is the OGC and INSPIRE Reference Implementation for their "Big Earth Data" standards suite. Recently, rasdaman has been augmented with a Python interface which allows transparent interaction with the database (credits go to Siddharth Shukla's Master Thesis at Jacobs University). Programmers do not need to know the rasdaman query language, as the operators are silently transformed, through lazy evaluation, into queries. Arrays delivered are likewise automatically transformed into their Python representation. In the talk, the rasdaman concept will be illustrated with the help of large-scale real-life examples of operational satellite image and weather data services, and sample Python code.
NASA Astrophysics Data System (ADS)
Trautt, Zachary T.; Tavazza, Francesca; Becker, Chandler A.
2015-10-01
The Materials Genome Initiative seeks to significantly decrease the cost and time of development and integration of new materials. Within the domain of atomistic simulations, several roadblocks stand in the way of reaching this goal. While the NIST Interatomic Potentials Repository hosts numerous interatomic potentials (force fields), researchers cannot immediately determine the best choice(s) for their use case. Researchers developing new potentials, specifically those in restricted environments, lack a comprehensive portfolio of efficient tools capable of calculating and archiving the properties of their potentials. This paper elucidates one solution to these problems, which uses Python-based scripts that are suitable for rapid property evaluation and human knowledge transfer. Calculation results are visible on the repository website, which reduces the time required to select an interatomic potential for a specific use case. Furthermore, property evaluation scripts are being integrated with modern platforms to improve discoverability and access of materials property data. To demonstrate these scripts and features, we will discuss the automation of stacking fault energy calculations and their application to additional elements. While the calculation methodology was developed previously, we are using it here as a case study in simulation automation and property calculations. We demonstrate how the use of Python scripts allows for rapid calculation in a more easily managed way where the calculations can be modified, and the results presented in user-friendly and concise ways. Additionally, the methods can be incorporated into other efforts, such as openKIM.
Cold induced mortality of the Burmese Python: An explanation via stochastic analysis
NASA Astrophysics Data System (ADS)
Quansah, Emmanuel; Parshad, Rana D.; Mondal, Sumona
2017-02-01
The Burmese python (Python bivittatus) is an invasive species, wreaking havoc on indigenous species in the Florida Everglades. Data suggest an exponential growth in their population from 1995 to 2009, with a sharp decline, however, in 2010-2012 (Dorcas et al., 2012). In Mazzotti et al. (2011) an explanation is provided, citing the unusually cold winter that year as the primary reason for this decline. We provide a first mathematical model, in the form of a system of stochastic differential equations, that supports the explanation in Mazzotti et al. (2011) by accurately matching the field data presented in Dorcas et al. (2012). More generally, our model provides a tool to predict the population dynamics of rapidly growing alien species in the face of climate change.
Detection of nidoviruses in live pythons and boas.
Marschang, Rachel E; Kolesnik, Ekaterina
2017-02-09
Nidoviruses have recently been described as a putative cause of severe respiratory disease in pythons in the USA and Europe. The objective of this study was to establish the use of a conventional PCR for the detection of nidoviruses in samples from live animals and to extend the list of susceptible species. A PCR targeting a portion of ORF1a of python nidoviruses was used to detect nidoviruses in diagnostic samples from live boas and pythons. A total of 95 pythons, 84 boas and 22 snakes of unknown species were included in the study. Samples tested included oral swabs and whole blood. Nidoviruses were detected in 27.4% of the pythons and 2.4% of the boas tested. They were most commonly detected in ball pythons (Python [P.] regius) and Indian rock pythons (P. molurus), but were also detected for the first time in other species, including Morelia spp. and Boa constrictor. Oral swabs most commonly tested positive. The PCR described here can be used for the detection of nidoviruses in oral swabs from live snakes. These viruses appear to be relatively common among snakes in captivity in Europe, and screening for these viruses should be considered in the clinical work-up. Nidoviruses are believed to be an important cause of respiratory disease in pythons but can also infect boas. Detection of these viruses in live animals is now possible and can be of interest both in diseased animals and in quarantine situations.
Development of an Optimization Methodology for the Aluminum Alloy Wheel Casting Process
NASA Astrophysics Data System (ADS)
Duan, Jianglan; Reilly, Carl; Maijer, Daan M.; Cockcroft, Steve L.; Phillion, Andre B.
2015-08-01
An optimization methodology has been developed for the aluminum alloy wheel casting process. The methodology is focused on improving the timing of cooling processes in a die to achieve improved casting quality. This methodology utilizes (1) a casting process model, which was developed within the commercial finite element package ABAQUS™ (ABAQUS is a trademark of Dassault Systèmes); (2) a Python-based results extraction procedure; and (3) a numerical optimization module from the open-source Python library SciPy. To achieve optimal casting quality, a set of constraints has been defined to ensure directional solidification, and an objective function, based on the solidification cooling rates, has been defined to either maximize, or target a specific, cooling rate. The methodology has been applied to a series of casting and die geometries with different cooling system configurations, including a 2-D axisymmetric wheel and die assembly generated from a full-scale prototype wheel. The results show that, with properly defined constraint and objective functions, solidification conditions can be improved and optimal cooling conditions can be achieved, leading to process productivity and product quality improvements.
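A stripped-down version of step (3) might look like the sketch below, where SciPy's SLSQP solver adjusts two hypothetical cooling-timing variables against a surrogate cooling-rate function; the surrogate, target rate, bounds, and ordering constraint are all illustrative assumptions standing in for the ABAQUS model and the results extraction step.

```python
import numpy as np
from scipy.optimize import minimize

TARGET_RATE = 3.0  # hypothetical target cooling rate (K/s)

def cooling_rate(timings):
    # Illustrative surrogate for the casting model plus results extraction.
    return 5.0 - 0.02 * np.asarray(timings)

def objective(timings):
    # Penalize deviation of the achieved cooling rates from the target.
    return float(np.sum((cooling_rate(timings) - TARGET_RATE) ** 2))

# Hypothetical constraint: the second cooling channel fires after the first.
constraints = [{"type": "ineq", "fun": lambda t: t[1] - t[0]}]
bounds = [(0.0, 120.0), (0.0, 120.0)]  # seconds

result = minimize(objective, x0=[10.0, 60.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```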
2018-01-18
processing. Specifically, the method described herein uses wgrib2 commands along with a Python script or program to produce tabular text files that in… It makes use of software that is readily available and can be implemented on many computer systems combined with relatively modest additional… example), extracts appropriate information, and lists the extracted information in a readable tabular form. The Python script used here is described in
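One plausible shape for the wgrib2-plus-Python pipeline sketched in the fragments above is shown below: wgrib2's -csv option dumps GRIB2 records to comma-separated text, which a short script then reads back; the file names are placeholders and the exact commands and column layout in the original report may differ.

```python
import csv
import subprocess

# Convert GRIB2 records to comma-separated text with wgrib2 (assumed to be on PATH).
subprocess.run(["wgrib2", "forecast.grb2", "-csv", "forecast.csv"], check=True)

# Read the tabular output back; early columns are typically times/field/level,
# the last column is the data value.
with open("forecast.csv", newline="") as fh:
    for row in csv.reader(fh):
        print(row[:4], row[-1])
```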
Python scripting in the nengo simulator.
Stewart, Terrence C; Tripp, Bryan; Eliasmith, Chris
2009-01-01
Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
Python Scripting in the Nengo Simulator
Stewart, Terrence C.; Tripp, Bryan; Eliasmith, Chris
2008-01-01
Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models. PMID:19352442
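The script interface has evolved since these papers were written; the sketch below uses the present-day standalone nengo package rather than the Java-backed scripting layer described above, and builds a small NEF network that computes the square of a sine input.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # 1 Hz sine input
    a = nengo.Ensemble(n_neurons=100, dimensions=1)       # encodes the input
    b = nengo.Ensemble(n_neurons=100, dimensions=1)       # encodes the square
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=np.square)            # NEF-derived decoders
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
print(sim.data[probe].shape)
```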
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.
Giannakopoulos, Theodoros
2015-01-01
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis
Giannakopoulos, Theodoros
2015-01-01
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library. PMID:26656189
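A minimal feature-extraction call, roughly as documented for the library around the time of this paper, is sketched below; module and function names have changed across pyAudioAnalysis versions, so treat the exact spellings and return values as assumptions, and the WAV file name as a placeholder.

```python
from pyAudioAnalysis import audioBasicIO, audioFeatureExtraction

# Load a WAV file (placeholder name) and extract short-term features
# over 50 ms windows with a 25 ms step.
fs, signal = audioBasicIO.readAudioFile("speech_sample.wav")

# Depending on the library version this returns the feature matrix alone
# or a (features, feature_names) tuple.
st_features = audioFeatureExtraction.stFeatureExtraction(
    signal, fs, 0.050 * fs, 0.025 * fs)
print(st_features)
```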
A Multi Agent System for Flow-Based Intrusion Detection
2013-03-01
Student t-test, as it is less likely to spuriously indicate significance because of the presence of outliers [128]. We use the MATLAB ranksum function [77...effectiveness of self-organization and “ entangled hierarchies” for accomplishing scenario objectives. One of the interesting features of SOMAS is the ability...cross-validation and automatic model selection. It has interfaces for Java, Python, R, Splus, MATLAB , Perl, Ruby, and LabVIEW. Kernels: linear
Wilber 3: A Python-Django Web Application For Acquiring Large-scale Event-oriented Seismic Data
NASA Astrophysics Data System (ADS)
Newman, R. L.; Clark, A.; Trabant, C. M.; Karstens, R.; Hutko, A. R.; Casey, R. E.; Ahern, T. K.
2013-12-01
Since 2001, the IRIS Data Management Center (DMC) WILBER II system has provided a convenient web-based interface for locating seismic data related to a particular event, and requesting a subset of that data for download. Since its launch, both the scale of available data and the technology of web-based applications have developed significantly. Wilber 3 is a ground-up redesign that leverages a number of public and open-source projects to provide an event-oriented data request interface with a high level of interactivity and scalability for multiple data types. Wilber 3 uses the IRIS/Federation of Digital Seismic Networks (FDSN) web services for event data, metadata, and time-series data. Combining a carefully optimized Google Map with the highly scalable SlickGrid data API, the Wilber 3 client-side interface can load tens of thousands of events or networks/stations in a single request, and provide instantly responsive browsing, sorting, and filtering of event and meta data in the web browser, without further reliance on the data service. The server-side of Wilber 3 is a Python-Django application, one of over a dozen developed in the last year at IRIS, whose common framework, components, and administrative overhead represent a massive savings in developer resources. Requests for assembled datasets, which may include thousands of data channels and gigabytes of data, are queued and executed using the Celery distributed Python task scheduler, giving Wilber 3 the ability to operate in parallel across a large number of nodes.
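The queued-execution pattern mentioned above can be sketched with a few lines of Celery; the broker URL, task body, and identifiers here are invented for illustration and do not reflect Wilber 3's actual task definitions.

```python
from celery import Celery

# The broker URL is a placeholder; any Celery-supported broker would do.
app = Celery("wilber", broker="redis://localhost:6379/0")

@app.task
def assemble_dataset(event_id, channels):
    """Fetch waveforms for one event and package them for download."""
    # ... call FDSN web services, write an archive, return its location ...
    return f"dataset for {event_id}: {len(channels)} channels"

# From the web request handler, the work is queued rather than run in-process:
# assemble_dataset.delay("us7000abcd", ["IU.ANMO.00.BHZ", "IU.COLA.00.BHZ"])
```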
The Virtual Brain: a simulator of primate brain network dynamics.
Sanz Leon, Paula; Knock, Stuart A; Woodman, M Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor
2013-01-01
We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting investigation of potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well-known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications.
The Virtual Brain: a simulator of primate brain network dynamics
Sanz Leon, Paula; Knock, Stuart A.; Woodman, M. Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R.; Jirsa, Viktor
2013-01-01
We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting investigation of potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well-known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications. PMID:23781198
Efficient and Flexible Climate Analysis with Python in a Cloud-Based Distributed Computing Framework
NASA Astrophysics Data System (ADS)
Gannon, C.
2017-12-01
As climate models become progressively more advanced, and spatial resolution is further improved through various downscaling projects, climate projections at a local level are increasingly insightful and valuable. However, the raw size of climate datasets presents numerous hurdles for analysts wishing to develop customized climate risk metrics or perform site-specific statistical analysis. Four Twenty Seven, a climate risk consultancy, has implemented a Python-based distributed framework to analyze large climate datasets in the cloud. With the freedom afforded by efficiently processing these datasets, we are able to customize and continually develop new climate risk metrics using the most up-to-date data. Here we outline our process for using Python packages such as XArray and Dask to evaluate netCDF files in a distributed framework, StarCluster to operate in a cluster-computing environment, cloud computing services to access publicly hosted datasets, and how this setup is particularly valuable for generating climate change indicators and performing localized statistical analysis.
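In outline, the XArray/Dask portion of such a setup looks like the sketch below: files are opened lazily in chunks, a threshold-count indicator is expressed on the labeled arrays, and nothing is computed until explicitly requested; the file pattern, the variable name tasmax, and the 35 degC threshold are illustrative assumptions, not the consultancy's actual metrics.

```python
import xarray as xr

# Open many netCDF files lazily as one dask-backed dataset, chunked along time.
ds = xr.open_mfdataset("tasmax_day_*.nc", chunks={"time": 365})

# Example climate indicator: number of days per year above 35 degC (308.15 K).
hot_days = (ds["tasmax"] > 308.15).groupby("time.year").sum("time")

# Computation only happens here, and can be spread across a cluster of workers.
result = hot_days.compute()
print(result)
```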
OpenStereo: Open Source, Cross-Platform Software for Structural Geology Analysis
NASA Astrophysics Data System (ADS)
Grohmann, C. H.; Campanha, G. A.
2010-12-01
Free and open source software (FOSS) is increasingly seen as synonymous with innovation and progress. Freedom to run, copy, distribute, study, change and improve the software (through access to the source code) assures a high level of positive feedback between users and developers, which results in stable, secure and constantly updated systems. Several software packages for structural geology analysis are available to the user, either under commercial licenses or downloadable at no cost from the Internet. Some provide basic tools of stereographic projections such as plotting poles, great circles, density contouring, eigenvector analysis, data rotation etc, while others perform more specific tasks, such as paleostress or geotechnical/rock stability analysis. This variety also means a wide range of input data formats, Graphical User Interface (GUI) designs and graphic export formats. The majority of packages are built for MS-Windows, and even though there are packages for the UNIX-based MacOS, there aren't native packages for *nix (UNIX, Linux, BSD etc) Operating Systems (OS), forcing users to run these programs with emulators or virtual machines. Those limitations led us to develop OpenStereo, an open source, cross-platform software for stereographic projections and structural geology. The software is written in Python, a high-level, cross-platform programming language, and the GUI is designed with wxPython, which provides a consistent look regardless of the OS. Numeric operations (like matrix and linear algebra) are performed with the NumPy module and all graphic capabilities are provided by the Matplotlib library, including on-screen plotting and graphic exporting to common desktop formats (emf, eps, ps, pdf, png, svg). Data input is done with simple ASCII text files, with values of dip direction and dip/plunge separated by spaces, tabs or commas. The user can open multiple files at the same time (or the same file more than once), and overlay different elements of each dataset (poles, great circles etc). The GUI shows the opened files in a tree structure, similar to the “layers” of many illustration programs, where the vertical order of the files in the tree reflects the drawing order of the selected elements. At this stage, the software performs plotting of poles to planes, lineations, great circles, density contours and rose diagrams. A set of statistics is calculated for each file, and its eigenvalues and eigenvectors are used to suggest whether the data are clustered about a mean value or distributed along a girdle. Modified Flinn, triangular and histogram plots are also available. The next step of development will focus on tools such as merging and rotation of datasets, the possibility to save 'projects', and paleostress analysis. In its current state, OpenStereo requires Python, wxPython, NumPy and Matplotlib installed in the system. We recommend installing PythonXY or the Enthought Python Distribution on MS-Windows and MacOS machines, since all dependencies are provided. Most Linux distributions provide an easy way to install all dependencies through software repositories. OpenStereo is released under the GNU General Public License. Programmers willing to contribute are encouraged to contact the authors directly. FAPESP Grant #09/17675-5
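The core plotting operation, projecting poles to planes onto a lower-hemisphere equal-area net, reduces to a few lines of NumPy and Matplotlib; the sketch below uses made-up dip-direction/dip values and is a conceptual illustration rather than OpenStereo's own code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up planes given as (dip direction, dip) in degrees.
planes = np.array([[120.0, 35.0], [135.0, 42.0], [300.0, 60.0]])

# Pole to a plane: trend = dip direction + 180, plunge = 90 - dip.
trend = np.radians((planes[:, 0] + 180.0) % 360.0)
plunge = np.radians(90.0 - planes[:, 1])

# Lower-hemisphere equal-area (Schmidt) projection.
r = np.sqrt(2.0) * np.sin((np.pi / 2.0 - plunge) / 2.0)
x, y = r * np.sin(trend), r * np.cos(trend)

fig, ax = plt.subplots()
ax.add_patch(plt.Circle((0.0, 0.0), 1.0, fill=False))  # primitive circle
ax.plot(x, y, "o")
ax.set_aspect("equal")
ax.set_axis_off()
plt.show()
```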
Xray: N-dimensional, labeled arrays for analyzing physical datasets in Python
NASA Astrophysics Data System (ADS)
Hoyer, S.
2015-12-01
Efficient analysis of geophysical datasets requires tools that both preserve and utilize metadata, and that transparently scale to process large datasets. Xray is such a tool, in the form of an open source Python library for analyzing the labeled, multi-dimensional array (tensor) datasets that are ubiquitous in the Earth sciences. Xray's approach pairs Python data structures based on the data model of the netCDF file format with the proven design and user interface of pandas, the popular Python data analysis library for labeled tabular data. On top of the NumPy array, xray adds labeled dimensions (e.g., "time") and coordinate values (e.g., "2015-04-10"), which it uses to enable a host of operations powered by these labels: selection, aggregation, alignment, broadcasting, split-apply-combine, interoperability with pandas and serialization to netCDF/HDF5. Many of these operations are enabled by xray's tight integration with pandas. Finally, to allow for easy parallelism and to enable its labeled data operations to scale to datasets that do not fit into memory, xray integrates with the parallel processing library dask.
Tellurium notebooks-An environment for reproducible dynamical modeling in systems biology.
Medley, J Kyle; Choi, Kiri; König, Matthias; Smith, Lucian; Gu, Stanley; Hellerstein, Joseph; Sealfon, Stuart C; Sauro, Herbert M
2018-06-01
The considerable difficulty encountered in reproducing the results of published dynamical models limits validation, exploration and reuse of this increasingly large biomedical research resource. To address this problem, we have developed Tellurium Notebook, a software system for model authoring, simulation, and teaching that facilitates building reproducible dynamical models and reusing models by 1) providing a notebook environment which allows models, Python code, and narrative to be intermixed, 2) supporting the COMBINE archive format during model development for capturing model information in an exchangeable format and 3) enabling users to easily simulate and edit public COMBINE-compliant models from public repositories to facilitate studying model dynamics, variants and test cases. Tellurium Notebook, a Python-based Jupyter-like environment, is designed to seamlessly inter-operate with these community standards by automating conversion between COMBINE standards formulations and corresponding in-line, human-readable representations. Thus, Tellurium brings to systems biology the strategy used by other literate notebook systems such as Mathematica. These capabilities allow users to edit every aspect of the standards-compliant models and simulations, run the simulations in-line, and re-export to standard formats. We provide several use cases illustrating the advantages of our approach and how it allows development and reuse of models without requiring technical knowledge of standards. Adoption of Tellurium should accelerate model development, reproducibility and reuse.
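Outside the notebook interface, the same simulation core can be driven from plain Python; the sketch below loads a one-reaction Antimony model with tellurium and simulates it, with the reaction and rate constant being arbitrary toy values rather than anything from the paper.

```python
import tellurium as te

# A toy one-reaction model written in Antimony (values are arbitrary).
r = te.loada("""
    S1 -> S2; k1 * S1;
    k1 = 0.1; S1 = 10; S2 = 0
""")

result = r.simulate(0, 50, 200)  # from t=0 to t=50 with 200 output points
r.plot(result)
```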
Efficient population-scale variant analysis and prioritization with VAPr.
Birmingham, Amanda; Mark, Adam M; Mazzaferro, Carlo; Xu, Guorong; Fisch, Kathleen M
2018-04-06
With the growing availability of population-scale whole-exome and whole-genome sequencing, demand for reproducible, scalable variant analysis has spread within genomic research communities. To address this need, we introduce the Python package VAPr (Variant Analysis and Prioritization). VAPr leverages existing annotation tools ANNOVAR and MyVariant.info with MongoDB-based flexible storage and filtering functionality. It offers biologists and bioinformatics generalists easy-to-use and scalable analysis and prioritization of genomic variants from large cohort studies. VAPr is developed in Python and is available for free use and extension under the MIT License. An install package is available on PyPi at https://pypi.python.org/pypi/VAPr, while source code and extensive documentation are on GitHub at https://github.com/ucsd-ccbb/VAPr. kfisch@ucsd.edu.
Expyriment: a Python library for cognitive and neuroscientific experiments.
Krause, Florian; Lindemann, Oliver
2014-06-01
Expyriment is an open-source and platform-independent lightweight Python library for designing and conducting timing-critical behavioral and neuroimaging experiments. The major goal is to provide a well-structured Python library for script-based experiment development, with a high priority being the readability of the resulting program code. Expyriment has been tested extensively under Linux and Windows and is an all-in-one solution, as it handles stimulus presentation, the recording of input/output events, communication with other devices, and the collection and preprocessing of data. Furthermore, it offers a hierarchical design structure, which allows for an intuitive transition from the experimental design to a running program. It is therefore also suited for students, as well as for experimental psychologists and neuroscientists with little programming experience.
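A near-minimal Expyriment script, adapted from the pattern in the library's documentation, is sketched below; the stimulus text and timing are arbitrary, and a real experiment would add blocks, trials, and response collection.

```python
from expyriment import design, control, stimuli

exp = design.Experiment(name="Hello Experiment")
control.initialize(exp)          # opens the screen, creates data/event files

control.start()                  # start/subject screen
stimuli.TextLine("Hello, world!").present()
exp.clock.wait(2000)             # show the stimulus for 2000 ms
control.end()                    # closes the screen and saves data
```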
Xarray: multi-dimensional data analysis in Python
NASA Astrophysics Data System (ADS)
Hoyer, Stephan; Hamman, Joe; Maussion, Fabien
2017-04-01
xarray (http://xarray.pydata.org) is an open source project and Python package that provides a toolkit and data structures for N-dimensional labeled arrays, which are the bread and butter of modern geoscientific data analysis. Key features of the package include label-based indexing and arithmetic, interoperability with the core scientific Python packages (e.g., pandas, NumPy, Matplotlib, Cartopy), out-of-core computation on datasets that don't fit into memory, a wide range of input/output options, and advanced multi-dimensional data manipulation tools such as group-by and resampling. In this contribution we will present the key features of the library and demonstrate its great potential for a wide range of applications, from (big-)data processing on super computers to data exploration in front of a classroom.
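The label-based style the package encourages is easiest to see in a few lines; the file and variable names below are placeholders, but the sel/groupby/to_netcdf calls are the standard xarray operations mentioned above.

```python
import xarray as xr

ds = xr.open_dataset("air_temperature.nc")            # placeholder file name

# Label-based selection and arithmetic instead of positional indexing.
april_10 = ds["air"].sel(time="2015-04-10")           # every timestep on that date
anomaly = ds["air"] - ds["air"].mean("time")

# Split-apply-combine over a datetime component, then write back to netCDF.
monthly = ds["air"].groupby("time.month").mean("time")
monthly.to_netcdf("monthly_climatology.nc")
```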
Ciavaglia, Sherryn; Linacre, Adrian
2018-05-01
Reptile species, and in particular snakes, are protected by national and international agreements yet are commonly handled illegally. To aid in the enforcement of such legislation, we report on the development of three 11-plex assays from the genome of the carpet python to type 24 loci of tetra-nucleotide and penta-nucleotide repeat motifs (pure, compound and complex included). The loci range in size between 70 and 550 bp. Seventeen of the loci are newly characterised with the inclusion of seven previously developed loci to facilitate cross-comparison with previous carpet python genotyping studies. Assays were optimised in accordance with human forensic profiling kits using one nanogram template DNA. Three loci are included in all three of the multiplex reactions as quality assurance markers, to ensure sample identity and genotyping accuracy is maintained across the three profiling assays. Allelic ladders have been developed for the three assays to ensure consistent and precise allele designation. A DNA reference database of allele frequencies is presented based on 249 samples collected from throughout the species native range. A small number of validation tests are conducted to demonstrate the utility of these multiplex assays. We suggest further appropriate validation tests that should be conducted prior to the application of the multiplex assays in criminal investigations involving carpet pythons. Copyright © 2018 Elsevier B.V. All rights reserved.
Replacing the IRAF/PyRAF Code-base at STScI: The Advanced Camera for Surveys (ACS)
NASA Astrophysics Data System (ADS)
Lucas, Ray A.; Desjardins, Tyler D.; STScI ACS (Advanced Camera for Surveys) Team
2018-06-01
IRAF/PyRAF are no longer viable on the latest hardware often used by HST observers; therefore, STScI no longer actively supports IRAF or PyRAF for most purposes. STScI instrument teams are in the process of converting all of our data processing and analysis code from IRAF/PyRAF to Python, including our calibration reference file pipelines and data reduction software. This is exemplified by our latest ACS Data Handbook, version 9.0, which was recently published in February 2018. Examples of IRAF and PyRAF commands have now been replaced by code blocks in Python, with references linked to documentation on how to download and install the latest Python software via Conda and AstroConda. With the temporary exception of the ACS slitless spectroscopy tool aXe, all ACS-related software is now independent of IRAF/PyRAF. A concerted effort has been made across STScI divisions to help the astronomical community transition from IRAF/PyRAF to Python, with tools such as Python Jupyter notebooks being made to give users workable examples. In addition to our code changes, the new ACS data handbook discusses the latest developments in charge transfer efficiency (CTE) correction, bias de-striping, and updates to the creation and format of calibration reference files, among other topics.
Mushu, a free- and open source BCI signal acquisition, written in Python.
Venthur, Bastian; Blankertz, Benjamin
2012-01-01
The following paper describes Mushu, a signal acquisition software for retrieval and online streaming of Electroencephalography (EEG) data. It is written for, but not limited to, the needs of Brain Computer Interfacing (BCI). Its main goal is to provide a unified interface to EEG data regardless of the amplifiers used. It runs under all major operating systems, like Windows, Mac OS and Linux, is written in Python, and is free- and open source software licensed under the terms of the GNU General Public License.
Introducing a New Software for Geodetic Analysis
NASA Astrophysics Data System (ADS)
Hjelle, G. A.; Dähnn, M.; Fausk, I.; Kirkvik, A. S.; Mysen, E.
2016-12-01
At the Norwegian Mapping Authority, we are currently developing Where, a new software for geodetic analysis. Where is built on our experiences with the Geosat software, and will be able to analyse and combine data from VLBI, SLR, GNSS and DORIS. The software is mainly written in Python which has proved very fruitful. The code is quick to write and the architecture is easily extendable and maintainable. The Python community provides a rich eco-system of tools for doing data analysis, including effective data storage and powerful visualization. Python interfaces well with other languages so that we can easily reuse existing, well-tested code like the SOFA and IERS libraries. This presentation will show some of the current capabilities of Where, including benchmarks against other software packages. In addition we will report on some simple investigations we have done using the software, and outline our plans for further progress.
Smelter, Andrey; Astra, Morgan; Moseley, Hunter N B
2017-03-17
The Biological Magnetic Resonance Data Bank (BMRB) is a public repository of Nuclear Magnetic Resonance (NMR) spectroscopic data of biological macromolecules. It is an important resource for many researchers using NMR to study structural, biophysical, and biochemical properties of biological macromolecules. It is primarily maintained and accessed in a flat file ASCII format known as NMR-STAR. While the format is human readable, the size of most BMRB entries makes computer readability and explicit representation a practical requirement for almost any rigorous systematic analysis. To aid in the use of this public resource, we have developed a package called nmrstarlib in the popular open-source programming language Python. The nmrstarlib's implementation is very efficient, both in design and execution. The library has facilities for reading and writing both NMR-STAR version 2.1 and 3.1 formatted files, parsing them into usable Python dictionary- and list-based data structures, making access and manipulation of the experimental data very natural within Python programs (i.e. "saveframe" and "loop" records represented as individual Python dictionary data structures). Another major advantage of this design is that data stored in original NMR-STAR can be easily converted into its equivalent JavaScript Object Notation (JSON) format, a lightweight data interchange format, facilitating data access and manipulation using Python and any other programming language that implements a JSON parser/generator (i.e., all popular programming languages). We have also developed tools to visualize assigned chemical shift values and to convert between NMR-STAR and JSONized NMR-STAR formatted files. Full API Reference Documentation, User Guide and Tutorial with code examples are also available. We have tested this new library on all current BMRB entries: 100% of all entries are parsed without any errors for both NMR-STAR version 2.1 and version 3.1 formatted files. We also compared our software to three currently available Python libraries for parsing NMR-STAR formatted files: PyStarLib, NMRPyStar, and PyNMRSTAR. The nmrstarlib package is a simple, fast, and efficient library for accessing data from the BMRB. The library provides an intuitive dictionary-based interface with which Python programs can read, edit, and write NMR-STAR formatted files and their equivalent JSONized NMR-STAR files. The nmrstarlib package can be used as a library for accessing and manipulating data stored in NMR-STAR files and as a command-line tool to convert from NMR-STAR file format into its equivalent JSON file format and vice versa, and to visualize chemical shift values. Furthermore, the nmrstarlib implementation provides a guide for effectively JSONizing other older scientific formats, improving the FAIRness of data in these formats.
SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.
Hitz, Benjamin C; Rowe, Laurence D; Podduturi, Nikhil R; Glick, David I; Baymuradov, Ulugbek K; Malladi, Venkat S; Chan, Esther T; Davidson, Jean M; Gabdank, Idan; Narayana, Aditi K; Onate, Kathrina C; Hilton, Jason; Ho, Marcus C; Lee, Brian T; Miyasato, Stuart R; Dreszer, Timothy R; Sloan, Cricket A; Strattan, J Seth; Tanaka, Forrest Y; Hong, Eurie L; Cherry, J Michael
2017-01-01
The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source, code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ to store genomic data in the manner of ENCODE. The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package.
SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata
Podduturi, Nikhil R.; Glick, David I.; Baymuradov, Ulugbek K.; Malladi, Venkat S.; Chan, Esther T.; Davidson, Jean M.; Gabdank, Idan; Narayana, Aditi K.; Onate, Kathrina C.; Hilton, Jason; Ho, Marcus C.; Lee, Brian T.; Miyasato, Stuart R.; Dreszer, Timothy R.; Sloan, Cricket A.; Strattan, J. Seth; Tanaka, Forrest Y.; Hong, Eurie L.; Cherry, J. Michael
2017-01-01
The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source, code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ to store genomic data in the manner of ENCODE. The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package. PMID:28403240
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Y; Tan, J; Jiang, S
Purpose: Plan-specific quality assurance (QA) is an important step in high dose rate (HDR) brachytherapy to ensure the integrity of a treatment plan. The conventional approach is to assemble a set of plan screen-captures in a document and have an independent plan-checker verify it. Not only is this approach cumbersome and time-consuming, using a document also limits the items that can be verified, hindering plan quality and patient safety. We have initiated efforts to develop a web-based HDR brachytherapy QA system called AutoBrachy QA for comprehensive and efficient QA. This abstract reports a new plugin in this system for the QA of a cylinder HDR brachytherapy treatment. Methods: A cylinder plan QA module was developed using Python. It was plugged into our AutoBrachy QA system. This module extracted information from CT images and the treatment plan. Image processing techniques were employed to obtain geometric parameters, e.g. cylinder diameter. A comprehensive set of eight geometrical and eight dosimetric features of the plan was validated against user-specified planning parameters, such as prescription value, treatment depth and length, etc. A PDF document was generated, consisting of a summary QA sheet with all the QA results, as well as images showing plan details. Results: The cylinder QA program has been implemented in our clinic. To date, it has been used in 11 patient cases and was able to successfully perform QA tests in all of them. The QA program reduced the average plan QA time from 7 min using the conventional manual approach to 0.5 min. Conclusion: As a new module in our AutoBrachy QA system, an automated treatment plan QA module for cylinder HDR brachytherapy has been successfully developed and clinically implemented. This module improved clinical workflow and plan integrity compared to the conventional manual approach.
Humoral regulation of heart rate during digestion in pythons (Python molurus and Python regius).
Enok, Sanne; Simonsen, Lasse Stærdal; Pedersen, Signe Vesterskov; Wang, Tobias; Skovgaard, Nini
2012-05-15
Pythons exhibit a doubling of heart rate when metabolism increases several times during digestion. Pythons, therefore, represent a promising model organism to study autonomic cardiovascular regulation during the postprandial state, and previous studies show that the postprandial tachycardia is governed by a release of vagal tone as well as a pronounced stimulation from nonadrenergic, noncholinergic (NANC) factors. Here we show that infusion of plasma from digesting donor pythons elicits a marked tachycardia in fasting snakes, demonstrating that the NANC factor resides in the blood. Injections of the gastrin and cholecystokinin receptor antagonist proglumide had no effect on double-blocked heart rate or blood pressure. Histamine has been recognized as a NANC factor in the early postprandial period in pythons, but the mechanism of its release has not been identified. Mast cells represent the largest repository of histamine in vertebrates, and it has been speculated that mast cells release histamine during digestion. Treatment with the mast cell stabilizer cromolyn significantly reduced postprandial heart rate in pythons compared with an untreated group but did not affect double-blocked heart rate. While this study indicates that histamine induces postprandial tachycardia in pythons, its release during digestion is not stimulated by gastrin or cholecystokinin, nor is its release from mast cells a stimulant of postprandial tachycardia.
FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.
Desai, Trunil S; Srivastava, Shireesh
2018-01-01
13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses
Desai, Trunil S.
2018-01-01
13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package. PMID:29736347
Falk, Bryan; Snow, Raymond W.; Reed, Robert
2016-01-01
Citizen-science programs have the potential to contribute to the management of invasive species, including Python molurus bivittatus (Burmese Python) in Florida. We characterized citizen-science–generated Burmese Python information from Everglades National Park (ENP) to explore how citizen science may be useful in this effort. As an initial step, we compiled and summarized records of Burmese Python observations and removals collected by both professional and citizen scientists in ENP during 2000–2014 and found many patterns of possible significance, including changes in annual observations and in demographic composition after a cold event. These patterns are difficult to confidently interpret because the records lack search-effort information, however, and differences among years may result from differences in search effort. We began collecting search-effort information in 2014 by leveraging an ongoing citizen-science program in ENP. Program participation was generally low, with most authorized participants in 2014 not searching for the snakes at all. We discuss the possible explanations for low participation, especially how the low likelihood of observing pythons weakens incentives to search. The monthly rate of Burmese Python observations for 2014 averaged ~1 observation for every 8 h of searching, but during several months, the rate was 1 python per >40 h of searching. These low observation-rates are a natural outcome of the snakes’ low detectability—few Burmese Pythons are likely to be observed even if many are present. The general inaccessibility of the southern Florida landscape also severely limits the effectiveness of using visual searches to find and remove pythons for the purposes of population control. Instead, and despite the difficulties in incentivizing voluntary participation, the value of citizen-science efforts in the management of the Burmese Python population is in collecting search-effort information.
Nonlinear dynamic macromodeling techniques for audio systems
NASA Astrophysics Data System (ADS)
Ogrodzki, Jan; Bieńkowski, Piotr
2015-09-01
This paper develops a modelling method and a model identification technique for nonlinear dynamic audio systems. Identification is performed by means of a behavioral approach based on a polynomial approximation. This approach makes use of the Discrete Fourier Transform and the Harmonic Balance Method. A model of an audio system is first created and identified, and then it is simulated in real time using an algorithm of low computational complexity. The algorithm consists of real-time emulation of the system response rather than simulation of the system itself. The proposed software is written in the Python language using object-oriented programming techniques. The code is optimized for a multithreaded environment.
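The identification idea, reading harmonic amplitudes off the DFT of the response to a sine, can be illustrated with a toy memoryless polynomial; the coefficients below are invented, and the sketch deliberately ignores the dynamic (memory) part of the models the paper actually builds.

```python
import numpy as np

fs, f0, n = 48000, 1000, 48000            # sample rate, test tone, one second
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)            # sinusoidal excitation

# Toy static nonlinearity standing in for an identified polynomial model.
y = x + 0.2 * x**2 + 0.05 * x**3

# Harmonic amplitudes straight off the DFT of the response.
spectrum = np.abs(np.fft.rfft(y)) / (n / 2)
for k in (1, 2, 3):
    bin_k = k * f0 * n // fs              # bin index of the k-th harmonic
    print(f"harmonic {k}: {spectrum[bin_k]:.4f}")
```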
The Programming Language Python In Earth System Simulations
NASA Astrophysics Data System (ADS)
Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.
2004-12-01
Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault-system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicholson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM), the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment Python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we show the basic concepts of escript and how it is used to implement a simulation code for interacting fault systems. We also show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
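A flavour of the Data/linearPDE abstraction, loosely following the examples in the escript user guide, is sketched below; the exact class, module and keyword names may differ between escript versions, so treat them as assumptions rather than a verified recipe.

```python
# Loosely follows escript user-guide examples; names are assumptions.
from esys.escript import kronecker
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

domain = Rectangle(n0=40, n1=20, l0=1.0, l1=0.5)   # 2-D finley mesh

# -div(A grad u) = Y with A = identity, i.e. a Poisson-type problem.
pde = LinearPDE(domain)
pde.setValue(A=kronecker(domain), Y=1.0)

u = pde.getSolution()    # a Data object holding values on the mesh
print(u)
```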
A modern Python interface for the Generic Mapping Tools
NASA Astrophysics Data System (ADS)
Uieda, L.; Wessel, P.
2017-12-01
Figures generated by The Generic Mapping Tools (GMT) are present in countless publications across the Earth sciences. The command-line interface of GMT lends the tool its flexibility but also creates a barrier to entry for beginners. Meanwhile, adoption of the Python programming language has grown across the scientific community. This growth is largely due to the simplicity and low barrier to entry of the language and its ecosystem of tools. Thus, it is not surprising that there have been at least three attempts to create Python interfaces for GMT: gmtpy (github.com/emolch/gmtpy), pygmt (github.com/ian-r-rose/pygmt), and PyGMT (github.com/glimmer-cism/PyGMT). None of these projects are currently active and, with the exception of pygmt, they do not use the GMT Application Programming Interface (API) introduced in GMT 5. The two main Python libraries for plotting data on maps are the matplotlib Basemap toolkit (matplotlib.org/basemap) and Cartopy (scitools.org.uk/cartopy), both of which rely on matplotlib (matplotlib.org) as the backend for generating the figures. Basemap is known to have limitations and is being discontinued. Cartopy is an improvement over Basemap but is still bound by the speed and memory constraints of matplotlib. We present a new Python interface for GMT (GMT/Python) that makes use of the GMT API and of new features being developed for the upcoming GMT 6 release. The GMT/Python library is designed according to the norms and styles of the Python community. The library integrates with the scientific Python ecosystem by using the "virtual files" from the GMT API to implement input and output of Python data types (numpy "ndarray" for tabular data and xarray "Dataset" for grids). Other features include an object-oriented interface for creating figures, the ability to display figures in the Jupyter notebook, and descriptive aliases for GMT arguments (e.g., "region" instead of "R" and "projection" instead of "J"). GMT/Python can also serve as a backend for developing new high-level interfaces, which can help make GMT more accessible to beginners and more intuitive for Python users. GMT/Python is an open-source project hosted on Github (github.com/GenericMappingTools/gmt-python) and is in early stages of development. A first release will accompany the release of GMT 6, which is expected for early 2018.
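The project described here grew into what is now distributed as PyGMT; a minimal figure in that present-day package, using the long-form "region"/"projection" aliases mentioned above, looks roughly like the sketch below, with the specific argument values chosen purely for illustration.

```python
import pygmt

fig = pygmt.Figure()
# Long-form aliases instead of GMT's single-letter flags (-R, -J, ...).
fig.coast(region="g", projection="W15c", land="gray80",
          water="skyblue", frame=True)
fig.show()
```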
Sanchez-Migallon Guzman, David; Garcia, Valentina E.; Layton, Marylee L.; Hoon-Hanks, Laura L.; Boback, Scott M.; Keel, M. Kevin; Drazenovich, Tracy
2017-01-01
ABSTRACT Inclusion body disease (IBD) is an infectious disease originally described in captive snakes. It has traditionally been diagnosed by the presence of large eosinophilic cytoplasmic inclusions and is associated with neurological, gastrointestinal, and lymphoproliferative disorders. Previously, we identified and established a culture system for a novel lineage of arenaviruses isolated from boa constrictors diagnosed with IBD. Although ample circumstantial evidence suggested that these viruses, now known as reptarenaviruses, cause IBD, there has been no formal demonstration of disease causality since their discovery. We therefore conducted a long-term challenge experiment to test the hypothesis that reptarenaviruses cause IBD. We infected boa constrictors and ball pythons by cardiac injection of purified virus. We monitored the progression of viral growth in tissues, blood, and environmental samples. Infection produced dramatically different disease outcomes in snakes of the two species. Ball pythons infected with Golden Gate virus (GoGV) and with another reptarenavirus displayed severe neurological signs within 2 months, and viral replication was detected only in central nervous system tissues. In contrast, GoGV-infected boa constrictors remained free of clinical signs for 2 years, despite high viral loads and the accumulation of large intracellular inclusions in multiple tissues, including the brain. Inflammation was associated with infection in ball pythons but not in boa constrictors. Thus, reptarenavirus infection produces inclusions and inclusion body disease, although inclusions per se are neither necessarily associated with nor required for disease. Although the natural distribution of reptarenaviruses has yet to be described, the different outcomes of infection may reflect differences in geographical origin. IMPORTANCE New DNA sequencing technologies have made it easier than ever to identify the sequences of microorganisms in diseased tissues, i.e., to identify organisms that appear to cause disease, but to be certain that a candidate pathogen actually causes disease, it is necessary to provide additional evidence of causality. We have done this to demonstrate that reptarenaviruses cause inclusion body disease (IBD), a serious transmissible disease of snakes. We infected boa constrictors and ball pythons with purified reptarenavirus. Ball pythons fell ill within 2 months of infection and displayed signs of neurological disease typical of IBD. In contrast, boa constrictors remained healthy over 2 years, despite high levels of virus throughout their bodies. This difference matches previous reports that pythons are more susceptible to IBD than boas and could reflect the possibility that boas are natural hosts of these viruses in the wild. PMID:28515291
BioInt: an integrative biological object-oriented application framework and interpreter.
Desai, Sanket; Burra, Prasad
2015-01-01
BioInt, a biological programming application framework and interpreter, is an attempt to equip researchers with seamless integration, efficient extraction, and effortless analysis of data from various biological databases and algorithms. Based on the type of biological data, algorithms, and related functionalities, a biology-specific framework was developed which has nine modules. The modules are a compilation of numerous reusable BioADTs. This software ecosystem, containing more than 450 biological objects underneath the interpreter, makes it flexible, integrative, and comprehensive. Similar to Python, BioInt eliminates the compilation and linking steps, cutting development time significantly. Researchers can write scripts using available BioADTs (following C++ syntax) and execute them interactively or use it as a command-line application. It has features that enable automation, extension of the framework with new/external BioADTs/libraries, and deployment of complex workflows.
NASA Astrophysics Data System (ADS)
Kochanov, R. V.; Gordon, I. E.; Rothman, L. S.; Wcisło, P.; Hill, C.; Wilzewski, J. S.
2016-07-01
The HITRAN Application Programming Interface (HAPI) is presented. HAPI is a free Python library that extends the capabilities of the HITRANonline interface (www.hitran.org) and can be used to filter and process structured spectroscopic data. HAPI incorporates a set of tools for spectra simulation accounting for temperature, pressure, optical path length, and instrument properties. HAPI aims to facilitate spectroscopic data analysis and spectra simulation based on line-by-line data, such as from the HITRAN database [JQSRT (2013) 130, 4-50], allowing the use of non-Voigt line profile parameters, custom temperature and pressure dependences, and partition sums. The HAPI functions allow the user to control the spectra simulation and data filtering process via a set of function parameters. HAPI can be obtained at its homepage www.hitran.org/hapi.
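A short sketch of the line-by-line workflow this abstract describes, using function names from the HAPI documentation, is given below. The molecule, wavenumber range, and environment values are illustrative assumptions rather than settings from the paper.

```python
# Minimal HAPI sketch: download H2O line data and simulate an absorption
# coefficient. Function names follow the HAPI documentation; the specific
# wavenumber range and environment are illustrative assumptions.
from hapi import db_begin, fetch, absorptionCoefficient_Lorentz

db_begin('hitran_data')            # local folder for downloaded line lists
fetch('H2O', 1, 1, 3400, 4100)     # molecule 1, isotopologue 1, wavenumbers in cm-1

# Lorentz-profile absorption coefficient at a chosen pressure and temperature
nu, coef = absorptionCoefficient_Lorentz(
    SourceTables='H2O',
    Environment={'p': 1.0, 'T': 296.0})
```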
A Survey of Visualization Tools Assessed for Anomaly-Based Intrusion Detection Analysis
2014-04-01
• …objective? • What vulnerabilities exist in the target system? • What damage or other consequences are likely? • What exploit scripts or other attack…
• …languages C, R, and Python; no response capabilities.
• JUNG (https://blogs.reucon.com/asterisk-java/tag/visualization/): create custom layouts and can …annotate graphs, links, nodes with any Java data type. Must be familiar with coding in Java to call the routines; no monitoring or response…
Leveraging Python Interoperability Tools to Improve Sapphire's Usability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gezahegne, A; Love, N S
2007-12-10
The Sapphire project at the Center for Applied Scientific Computing (CASC) develops and applies an extensive set of data mining algorithms for the analysis of large data sets. Sapphire's algorithms are currently available as a set of C++ libraries. However, many users prefer higher-level scripting languages such as Python for their ease of use and flexibility. In this report, we evaluate four interoperability tools for the purpose of wrapping Sapphire's core functionality with Python. Exposing Sapphire's functionality through a Python interface would increase its usability and connect its algorithms to existing Python tools.
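As a generic illustration of the kind of wrapping the report evaluates, the sketch below calls an exported function of a shared C/C++ library from Python via ctypes. The library and function names are hypothetical; this is not Sapphire's actual binding code.

```python
# Generic illustration of one wrapping approach (ctypes), not Sapphire's
# actual bindings: load a shared C/C++ library and call an exported
# function from Python. Library and function names are hypothetical.
import ctypes

lib = ctypes.CDLL("./libexample.so")   # hypothetical compiled C++ library

# Declare the C signature: int cluster_points(double *data, int n)
lib.cluster_points.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
lib.cluster_points.restype = ctypes.c_int

data = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)   # C array built from Python
n_clusters = lib.cluster_points(data, len(data))
```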
The zoonotic implications of pentastomiasis in the royal python (python regius).
Ayinmode, Ab; Adedokun, Ao; Aina, A; Taiwo, V
2010-09-01
Pentastomes are worm-like endoparasites of the phylum Pentastomida found principally in the respiratory tract of reptiles, birds, and mammals. They cause a zoonotic disease known as pentastomiasis in humans and other mammals. The autopsy of a Nigerian royal python (Python regius) revealed two yellowish-white parasites in the lungs, tissue necrosis and inflammatory lesions. The parasite was confirmed to be Armillifer spp (Pentastomid); this is the first recorded case of pentastomiasis in the royal python (Python regius) in Nigeria. This report may be an alert of the possibility of on-going zoonotic transmission of pentastomiasis from snake to man, especially in the sub-urban/rural areas of Nigeria and other West African countries where people consume snake meat.
NASA Astrophysics Data System (ADS)
Jenness, Tim; Robitaille, Thomas; Tollerud, Erik; Mumford, Stuart; Cruz, Kelle
2016-04-01
The second Python in Astronomy conference will be held from 21-25 March 2016 at the University of Washington eScience Institute in Seattle, WA, USA. Similarly to the 2015 meeting (which was held at the Lorentz Center), we are aiming to bring together researchers, Python developers, users, and educators. The conference will include presentations, tutorials, unconference sessions, and coding sprints. In addition to sharing information about state-of-the art Python Astronomy packages, the workshop will focus on improving interoperability between astronomical Python packages, providing training for new open-source contributors, and developing educational materials for Python in Astronomy. The meeting is therefore not only aimed at current developers, but also users and educators who are interested in being involved in these efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Erik; Blume-Kohout, Robin; Rudinger, Kenneth
PyGSTi is an implementation of Gate Set Tomography in the Python programming language. Gate Set Tomography (GST) is a theory and protocol for simultaneously estimating the state preparation, gate operations, and measurement effects of a physical system of one or many quantum bits (qubits). These estimates are based entirely on the statistics of experimental measurements, and their interpretation and analysis can provide a detailed understanding of the types of errors/imperfections in the physical system. In this way, GST provides not only a means of certifying the "goodness" of qubits but also a means of debugging (i.e. improving) them.
NASA Astrophysics Data System (ADS)
Akpojotor, Godfrey; Ehwerhemuepha, Louis; Amromanoh, Ogheneriobororue
2013-03-01
The presence of physical systems whose characteristics change in a seemingly erratic manner gives rise to the study of chaotic systems. The characteristics of these systems are due to their hypersensitivity to changes in initial conditions. In order to understand chaotic systems, some sort of simulation and visualization is pertinent. Consequently, in this work, we have simulated and graphically visualized chaos in a driven nonlinear pendulum as a means of introducing chaotic systems. The results obtained, which highlight the hypersensitivity of the pendulum, are used to discuss the effectiveness of teaching and learning the physics of chaotic systems using Python. This study is one of the many studies under the African Computational Science and Engineering Tour Project (PASET), which is using Python to model, simulate and visualize concepts, laws and phenomena in Science and Engineering to complement the teaching/learning of theory and experiment.
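The sketch below shows the kind of simulation this abstract describes: a driven, damped nonlinear pendulum integrated with scipy, where two nearly identical initial angles diverge rapidly. The parameter values are illustrative, not those used in the study.

```python
# Driven, damped nonlinear pendulum integrated with scipy; two nearly
# identical initial angles illustrate sensitivity to initial conditions.
# Parameter values are illustrative assumptions, not those from the paper.
import numpy as np
from scipy.integrate import odeint

def pendulum(state, t, q=0.5, F=1.2, omega_d=2.0 / 3.0):
    theta, omega = state
    dtheta = omega
    domega = -np.sin(theta) - q * omega + F * np.sin(omega_d * t)
    return [dtheta, domega]

t = np.linspace(0, 100, 5000)
sol1 = odeint(pendulum, [0.2, 0.0], t)
sol2 = odeint(pendulum, [0.2001, 0.0], t)   # tiny perturbation of the start angle

# For chaotic parameter choices the two trajectories separate quickly
print(np.abs(sol1[:, 0] - sol2[:, 0]).max())
```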
Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.
2013-01-01
SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user-specified parameters. Each of the three separate analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software may also choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
Python tools for rapid development, calibration, and analysis of generalized groundwater-flow models
NASA Astrophysics Data System (ADS)
Starn, J. J.; Belitz, K.
2014-12-01
National-scale water-quality data sets for the United States have been available for several decades; however, groundwater models to interpret these data are available for only a small percentage of the country. Generalized models may be adequate to explain and project groundwater-quality trends at the national scale by using regional scale models (defined as watersheds at or between the HUC-6 and HUC-8 levels). Coast-to-coast data such as the National Hydrologic Dataset Plus (NHD+) make it possible to extract the basic building blocks for a model anywhere in the country. IPython notebooks have been developed to automate the creation of generalized groundwater-flow models from the NHD+. The notebook format allows rapid testing of methods for model creation, calibration, and analysis. Capabilities within the Python ecosystem greatly speed up the development and testing of algorithms. GeoPandas is used for very efficient geospatial processing. Raster processing includes the Geospatial Data Abstraction Library and image processing tools. Model creation is made possible through Flopy, a versatile input and output writer for several MODFLOW-based flow and transport model codes. Interpolation, integration, and map plotting included in the standard Python tool stack are also used, making the notebook a comprehensive platform within which to build and evaluate general models. Models with alternative boundary conditions, number of layers, and cell spacing can be tested against one another and evaluated by using water-quality data. Novel calibration criteria were developed by comparing modeled heads to land-surface and surface-water elevations. Information, such as predicted age distributions, can be extracted from general models and tested for its ability to explain water-quality trends. Groundwater ages can then be correlated with horizontal and vertical hydrologic position, a relation that can be used for statistical assessment of likely groundwater-quality conditions. Convolution with age distributions can be used to quickly ascertain likely future water-quality conditions. Although these models are admittedly very general and are still being tested, the hope is that they will be useful for answering questions related to water quality at the regional scale.
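A minimal Flopy sketch of assembling a MODFLOW model, the step this abstract describes automating from NHD+ data, is shown below. The grid dimensions and hydraulic properties are placeholders, not values from the actual workflow, and a real generalized model would involve many more packages.

```python
# Minimal Flopy sketch of building and writing a MODFLOW model; grid sizes
# and hydraulic properties are placeholder assumptions, not NHD+-derived values.
import flopy

m = flopy.modflow.Modflow(modelname="general_model", exe_name="mf2005")
dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=50, ncol=50,
                               delr=100.0, delc=100.0, top=10.0, botm=-40.0)
bas = flopy.modflow.ModflowBas(m)            # boundary and starting conditions
lpf = flopy.modflow.ModflowLpf(m, hk=10.0)   # hydraulic conductivity
pcg = flopy.modflow.ModflowPcg(m)            # solver package
m.write_input()                              # write the MODFLOW input files
```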
Luminal and systemic signals trigger intestinal adaptation in the juvenile python.
Secor, S M; Whang, E E; Lane, J S; Ashley, S W; Diamond, J
2000-12-01
Juvenile pythons undergo large rapid upregulation of intestinal mass and intestinal transporter activities upon feeding. Because it is also easy to do surgery on pythons and to maintain them in the laboratory, we used a python model to examine signals and agents for intestinal adaptation. We surgically isolated the middle third of the small intestine from enteric continuity, leaving its mesenteric nerve and vascular supply intact. Intestinal continuity was restored by an end-to-end anastomosis between the proximal and distal thirds. Within 24 h of the snake's feeding, the reanastomosed proximal and distal segments (receiving luminal nutrients) had upregulated amino acid and glucose uptakes by up to 15-fold, had doubled intestinal mass, and thereby soon achieved total nutrient uptake capacities equal to those of the normal fed full-length intestine. At this time, however, the isolated middle segment, receiving no luminal nutrients, experienced no changes from the fasted state in either nutrient uptakes or in morphology. By 3 days postfeeding, the isolated middle segment had upregulated nutrient uptakes to the same levels as the reanastomosed proximal and distal segments, but it still lacked any appreciable morphological response. These contrasting results for the reanastomosed intestine and for the isolated middle segment suggest that luminal nutrients and/or pancreatic biliary secretions are the agents triggering rapid upregulation of transporters and of intestinal mass and that systemic nerve or hormonal signals later trigger transporter regulation but no trophic response.
Structure and function of the hearts of lizards and snakes.
Jensen, Bjarke; Moorman, Antoon F M; Wang, Tobias
2014-05-01
With approximately 7000 species, snakes and lizards, collectively known as squamates, are by far the most species-rich group of reptiles. It was from reptile-like ancestors that mammals and birds evolved and squamates can be viewed as phylogenetically positioned between them and fishes. Hence, their hearts have been studied for more than a century yielding insights into the group itself and into the independent evolution of the fully divided four-chambered hearts of mammals and birds. Structurally the heart is complex and debates persist on rudimentary issues such as identifying structures critical to understanding ventricle function. In seeking to resolve these controversies we have generated three-dimensional (3D) models in portable digital format (pdf) of the anaconda and anole lizard hearts ('typical' squamate hearts) and the uniquely specialized python heart with comprehensive annotations of structures and cavities. We review the anatomy and physiology of squamate hearts in general and emphasize the unique features of pythonid and varanid lizard hearts that endow them with mammal-like blood pressures. Excluding pythons and varanid lizards it is concluded that the squamate heart has a highly consistent design including a disproportionately large right side (systemic venous) probably due to prevailing pulmonary bypass (intraventricular shunting). Unfortunately, investigations on rudimentary features are sparse. We therefore point out gaps in our knowledge, such as the size and functional importance of the coronary vasculature and of the first cardiac chamber, the sinus venosus, and highlight areas with implications for vertebrate cardiac evolution. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.
Review of Software Platforms for Agent Based Models
2008-04-01
EINSTein 4.3.2: Battlefield; Python (optional, for batch runs). MANA 4.3.3: Battlefield; N/A. MASON 4.3.4: General; Java. NetLogo 4.3.5: General; Logo-variant. …through the use of relatively simple Python scripts. It also has built-in functions for parameter sweeps, and can plot the resulting fitness landscape ac… Nonetheless its ease of use, and support for automatic drawing of agents in 2D or 3D2, makes this a suitable platform for beginner programmers. 2Only in the…
PREdator: a python based GUI for data analysis, evaluation and fitting
2014-01-01
The analysis of a series of experimental data is an essential procedure in virtually every field of research. The information contained in the data is extracted by fitting the experimental data to a mathematical model. The type of mathematical model (linear, exponential, logarithmic, etc.) reflects the physical laws that underlie the experimental data. Here, we aim to provide a readily accessible, user-friendly Python script for data analysis, evaluation and fitting. PREdator is demonstrated using the example of NMR paramagnetic relaxation enhancement analysis.
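As a generic illustration of the kind of model fitting described here, the sketch below fits noisy data to an exponential model with scipy. This is not PREdator's own code; the model and data are synthetic.

```python
# Generic illustration of fitting experimental data to an exponential model,
# the kind of analysis PREdator wraps in a GUI; this is not PREdator's code.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(x, amplitude, rate):
    return amplitude * np.exp(-rate * x)

x = np.linspace(0, 5, 40)
y = exp_decay(x, 2.0, 1.3) + np.random.normal(0, 0.05, x.size)  # noisy synthetic data

popt, pcov = curve_fit(exp_decay, x, y, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))   # one-sigma parameter uncertainties
print(popt, perr)
```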
TRIPPy: Python-based Trailed Source Photometry
NASA Astrophysics Data System (ADS)
Fraser, Wesley C.; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michael E.; Pike, Rosemary E.; Kavelaars, JJ; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey
2016-05-01
TRIPPy (TRailed Image Photometry in Python) uses a pill-shaped aperture, a rectangle described by three parameters (trail length, angle, and radius) to improve photometry of moving sources over that done with circular apertures. It can generate accurate model and trailed point-spread functions from stationary background sources in sidereally tracked images. Appropriate aperture correction provides accurate, unbiased flux measurement. TRIPPy requires numpy, scipy, matplotlib, Astropy (ascl:1304.002), and stsci.numdisplay; emcee (ascl:1303.002) and SExtractor (ascl:1010.064) are optional.
Schilliger, Lionel; Chetboul, Valérie; Damoiseaux, Cécile; Nicolier, Alexandra
2016-12-01
Echocardiography is an established and noninvasive diagnostic tool used in herpetologic cardiology. Various cardiac lesions have been previously described in reptiles with the exception of restrictive cardiomyopathy. In this case report, restrictive cardiomyopathy and congestive heart failure associated with left atrial and sinus venosus dilation were diagnosed in a 2-yr-old captive lethargic McDowell's carpet python ( Morelia spilota mcdowelli), based on echocardiographic, Doppler, and histopathologic examinations. This cardiomyopathy was also associated with thrombosis within the sinus venosus.
Eddylicious: A Python package for turbulent inflow generation
NASA Astrophysics Data System (ADS)
Mukha, Timofey; Liefvendahl, Mattias
2018-01-01
A Python package for generating inflow for scale-resolving computer simulations of turbulent flow is presented. The purpose of the package is to unite existing inflow generation methods in a single code-base and make them accessible to users of various Computational Fluid Dynamics (CFD) solvers. The currently existing functionality consists of an accurate inflow generation method suitable for flows with a turbulent boundary layer inflow and input/output routines for coupling with the open-source CFD solver OpenFOAM.
CHASM and SNVBox: toolkit for detecting biologically important single nucleotide mutations in cancer
Carter, Hannah; Diekhans, Mark; Ryan, Michael C.; Karchin, Rachel
2011-01-01
Summary: Thousands of cancer exomes are currently being sequenced, yielding millions of non-synonymous single nucleotide variants (SNVs) of possible relevance to disease etiology. Here, we provide a software toolkit to prioritize SNVs based on their predicted contribution to tumorigenesis. It includes a database of precomputed, predictive features covering all positions in the annotated human exome and can be used either stand-alone or as part of a larger variant discovery pipeline. Availability and Implementation: MySQL database, source code and binaries freely available for academic/government use at http://wiki.chasmsoftware.org, Source in Python and C++. Requires a 32- or 64-bit Linux system (tested on Fedora Core 8, 10, 11 and Ubuntu 10), 2.5 ≤ Python < 3.0, MySQL server >5.0, 60 GB available hard disk space (50 MB for software and data files, 40 GB for MySQL database dump when uncompressed), 2 GB of RAM. Contact: karchin@jhu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21685053
Consumption of bird eggs by invasive Burmese Pythons in Florida
Dove, Carla J.; Reed, Robert N.; Snow, Ray W.
2012-01-01
Burmese Pythons (Python molurus bivittatus or P. bivittatus) have been reported to consume 25 species of adult birds in Everglades National Park, Florida (Dove et al. 2011), but until now no records documented this species eating bird eggs. Here we report three recent cases of bird-egg consumption by Burmese Pythons and discuss egg-eating in basal snakes.
Acariasis on pet Burmese python, Python molurus bivittatus in Malaysia.
Mariana, A; Vellayan, S; Halimaton, I; Ho, T M
2011-03-01
The aim of this study was to identify the acari present on pet Burmese pythons in Malaysia and to determine whether there is any potential public health risk related to handling the snakes. Two sub-adult Burmese pythons, kept as pets for about 6 to 7 months by different owners, were brought to an exotic animal practice for treatment. On a complete medical examination, some ticks and mites (acari) were detected beneath the dorsal and ventral scales along the body length of the snakes. Ticks were directly identified and mites were mounted prior to identification. A total of 12 ticks, represented by 3 males, 2 females and 7 nymphal stages of Rhipicephalus sanguineus (R. sanguineus), were extracted from the first python, while the other was infested with 25 female Ophionyssus natricis (O. natricis) mesostigmatid mites. Only adult female mites were found. These mites are common ectoparasites of Burmese pythons. Both the acarine species found on the Burmese pythons are known vectors of pathogens. This is the first record that R. sanguineus has been reported from a pet Burmese python in Malaysia. Copyright © 2011 Hainan Medical College. Published by Elsevier B.V. All rights reserved.
Python in the NERSC Exascale Science Applications Program for Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack
We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of "Python purity" from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower-level programming languages like C, C++ or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.
Responses of python gastrointestinal regulatory peptides to feeding
Secor, Stephen M.; Fehsenfeld, Drew; Diamond, Jared; Adrian, Thomas E.
2001-01-01
In the Burmese python (Python molurus), the rapid up-regulation of gastrointestinal (GI) function and morphology after feeding, and subsequent down-regulation on completing digestion, are expected to be mediated by GI hormones and neuropeptides. Hence, we examined postfeeding changes in plasma and tissue concentrations of 11 GI hormones and neuropeptides in the python. Circulating levels of cholecystokinin (CCK), glucose-dependent insulinotropic peptide (GIP), glucagon, and neurotensin increase by respective factors of 25-, 6-, 6-, and 3.3-fold within 24 h after feeding. In digesting pythons, the regulatory peptides neurotensin, somatostatin, motilin, and vasoactive intestinal peptide occur largely in the stomach, GIP and glucagon in the pancreas, and CCK and substance P in the small intestine. Tissue concentrations of CCK, GIP, and neurotensin decline with feeding. Tissue distributions and molecular forms (as determined by gel-permeation chromatography) of many python GI peptides are similar or identical to those of their mammalian counterparts. The postfeeding release of GI peptides from tissues, and their concurrent rise in plasma concentrations, suggests that they play a role in regulating python-digestive responses. These large postfeeding responses, and similarities of peptide structure with mammals, make pythons an attractive model for studying GI peptides. PMID:11707600
The Earth Data Analytic Services (EDAS) Framework
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; Duffy, D.
2017-12-01
Faced with unprecedented growth in earth data volume and demand, NASA has developed the Earth Data Analytic Services (EDAS) framework, a high-performance big data analytics framework built on Apache Spark. This framework enables scientists to execute data processing workflows combining common analysis operations close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using vetted earth data analysis tools (ESMF, CDAT, NCO, etc.). EDAS utilizes a dynamic caching architecture, a custom distributed array framework, and a streaming parallel in-memory workflow for efficiently processing huge datasets within limited memory spaces with interactive response times. EDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using direct web service calls, a Python script, a Unix-like shell client, or a JavaScript-based web application. New analytic operations can be developed in Python, Java, or Scala (with support for other languages planned). Client packages in Python, Java/Scala, or JavaScript contain everything needed to build and submit EDAS requests. The EDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service enables decision makers to compare multiple reanalysis datasets and investigate trends, variability, and anomalies in earth system dynamics around the globe.
pyNBS: A Python implementation for network-based stratification of tumor mutations.
Huang, Justin K; Jia, Tongqiu; Carlin, Daniel E; Ideker, Trey
2018-03-28
We present pyNBS: a modularized Python 2.7 implementation of the network-based stratification (NBS) algorithm for stratifying tumor somatic mutation profiles into molecularly and clinically relevant subtypes. In addition to release of the software, we benchmark its key parameters and provide a compact cancer reference network that increases the significance of tumor stratification using the NBS algorithm. The structure of the code exposes key steps of the algorithm to foster further collaborative development. The package, along with examples and data, can be downloaded and installed from the URL http://www.github.com/huangger/pyNBS/. jkh013@ucsd.edu.
Fatty acids identified in the Burmese python promote beneficial cardiac growth.
Riquelme, Cecilia A; Magida, Jason A; Harrison, Brooke C; Wall, Christopher E; Marr, Thomas G; Secor, Stephen M; Leinwand, Leslie A
2011-10-28
Burmese pythons display a marked increase in heart mass after a large meal. We investigated the molecular mechanisms of this physiological heart growth with the goal of applying this knowledge to the mammalian heart. We found that heart growth in pythons is characterized by myocyte hypertrophy in the absence of cell proliferation and by activation of physiological signal transduction pathways. Despite high levels of circulating lipids, the postprandial python heart does not accumulate triglycerides or fatty acids. Instead, there is robust activation of pathways of fatty acid transport and oxidation combined with increased expression and activity of superoxide dismutase, a cardioprotective enzyme. We also identified a combination of fatty acids in python plasma that promotes physiological heart growth when injected into either pythons or mice.
Integrating neuroinformatics tools in TheVirtualBrain.
Woodman, M Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor
2014-01-01
TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting.
Integrating neuroinformatics tools in TheVirtualBrain
Woodman, M. Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A.; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R.; Jirsa, Viktor
2014-01-01
TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting. PMID:24795617
First record of invasive Burmese Python oviposition and brooding inside an anthropogenic structure
Hanslowe, Emma; Falk, Bryan; Collier, Michelle A. M.; Josimovich, Jillian; Rahill, Thomas; Reed, Robert
2016-01-01
We discovered an adult female Python bivittatus (Burmese Python) coiled around a clutch of 25 eggs in a cement culvert in Flamingo, FL, in Everglades National Park. To our knowledge, this is the first record of an invasive Burmese Python laying eggs and brooding inside an anthropogenic structure in Florida. A 92% hatch-success rate suggests that the cement culvert provided suitable conditions for oviposition, embryonic development, and hatching. Given the plenitude of such anthropogenic structures across the landscape, available sites for oviposition and brooding may not be limiting for the invasive Burmese Python population.
Reed, Robert N.; Hart, Kristen M.; Rodda, Gordon H.; Mazzotti, Frank J.; Snow, Ray W.; Cherkiss, Michael; Rozar, Rondald; Goetz, Scott
2011-01-01
Conclusions: The trap trial captured a relatively small proportion of the pythons that appeared to be present in the study area, although previous research suggests that trap capture rates improve with additional testing of alternative trap designs. Potential negative impacts to non-target species were minimal. Low python capture rates may have been associated with extremely high local prey abundances during the trap experiment. Implications: Results of this trial illustrate many of the challenges in implementing and interpreting results from tests of control tools for large cryptic predators such as Burmese pythons.
Re-imagining a Stata/Python Combination
NASA Technical Reports Server (NTRS)
Fiedler, James
2013-01-01
At last year's Stata Conference, I presented some ideas for combining Stata and the Python programming language within a single interface. Two methods were presented: in one, Python was used to automate Stata; in the other, Python was used to send simulated keystrokes to the Stata GUI. The first method has the drawback of only working in Windows, and the second can be slow and subject to character input limits. In this presentation, I will demonstrate a method for achieving interaction between Stata and Python that does not suffer these drawbacks, and I will present some examples to show how this interaction can be useful.
ACPYPE - AnteChamber PYthon Parser interfacE.
Sousa da Silva, Alan W; Vranken, Wim F
2012-07-23
ACPYPE (or AnteChamber PYthon Parser interfacE) is a wrapper script around the ANTECHAMBER software that simplifies the generation of small molecule topologies and parameters for a variety of molecular dynamics programmes like GROMACS, CHARMM and CNS. It is written in the Python programming language and was developed as a tool for interfacing with other Python based applications such as the CCPN software suite (for NMR data analysis) and ARIA (for structure calculations from NMR data). ACPYPE is open source code, under GNU GPL v3, and is available as a stand-alone application at http://www.ccpn.ac.uk/acpype and as a web portal application at http://webapps.ccpn.ac.uk/acpype. We verified the topologies generated by ACPYPE in three ways: by comparing with default AMBER topologies for standard amino acids; by generating and verifying topologies for a large set of ligands from the PDB; and by recalculating the structures for 5 protein-ligand complexes from the PDB. ACPYPE is a tool that simplifies the automatic generation of topology and parameters in different formats for different molecular mechanics programmes, including calculation of partial charges, while being object oriented for integration with other applications.
NASA Astrophysics Data System (ADS)
Mangantar Pardamean Sianturi, Markus; Jumilawaty, Erni; Delvian; Hartanto, Adrian
2018-03-01
Blood python (Python brongersmai Stull, 1938) is one of the most heavily exploited wildlife species in Indonesia. High demand for its skin has led the government to regulate its harvest under a quota-based setting to prevent over-harvesting. To gain understanding of the sustainability of P. brongersmai in the wild, biological characters of wild-caught specimens were studied. Samples were collected from two slaughterhouses, in Rantau Prapat and Langkat. Parameters measured were morphological characters (snout-vent length (SVL), body mass, abdomen width) and anatomical characters (fat classes). A total of 541 P. brongersmai were sampled in this research, comprising 269 male and 272 female snakes. Female snakes had the highest proportion of individuals with the best quality of abdominal fat reserves (Class 3). Linear models were built and tested for the significance of the relationship between fat class, as the anatomical character, and the morphological characters. All tested morphological characters were significant in female snakes. By using the linear equation models, we generated size limits to prioritize harvesting in the future. We suggest the use of SVL and stomach width ranging between 139.7–141.5 cm and 24.72–25.71 cm, respectively, to achieve sustainability of P. brongersmai in the wild.
Detection and phylogenetic analysis of a new adenoviral polymerase gene in reptiles in Korea.
Bak, Eun-Jung; Jho, Yeonsook; Woo, Gye-Hyeong
2018-06-01
Over a period of 7 years (2004-2011), samples from 34 diseased reptiles provided by local governments, zoos, and pet shops were tested for viral infection. Animals were diagnosed based on clinical signs, including loss of appetite, diarrhea, rhinorrhea, and unexpected sudden death. Most of the exotic animals had gastrointestinal problems, such as mucosal redness and ulcers, while the native animals had no clinical symptoms. Viral sequences were found in seven animals. Retroviral genes were amplified from samples from five Burmese pythons (Python molurus bivittatus), an adenovirus was detected in a panther chameleon (Furcifer pardalis), and an adenovirus and a paramyxovirus were detected in a tropical girdled lizard (Cordylus tropidosternum). Phylogenetic analysis of retroviruses and paramyxoviruses showed the highest sequence identity to both a Python molurus endogenous retrovirus and a Python curtus endogenous retrovirus and to a lizard isolate, respectively. Partial sequencing of an adenoviral DNA polymerase gene from the lizard isolate suggested that the corresponding virus was a novel isolate different from the reference strain (accession no. AY576677.1). The virus was not isolated but was detected, using molecular genetic techniques, in a lizard raised in a pet shop. This animal was also coinfected with a paramyxovirus.
PyMVPA: A python toolbox for multivariate pattern analysis of fMRI data.
Hanke, Michael; Halchenko, Yaroslav O; Sederberg, Per B; Hanson, Stephen José; Haxby, James V; Pollmann, Stefan
2009-01-01
Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.
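As a generic illustration of the classifier-based analysis this abstract describes, the sketch below runs cross-validated decoding on toy data using scikit-learn as a stand-in; PyMVPA's own API differs, and the data here are synthetic.

```python
# Generic illustration of classifier-based multivariate pattern analysis
# using scikit-learn as a stand-in; this is not PyMVPA's API, and the
# "fMRI" data are synthetic.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Toy data: 80 trials x 500 voxels, two cognitive states (labels 0/1)
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))
y = np.repeat([0, 1], 40)
X[y == 1, :20] += 0.5          # weak multivoxel signal spread over 20 voxels

clf = LinearSVC()
scores = cross_val_score(clf, X, y, cv=8)   # 8-fold cross-validated decoding
print(scores.mean())
```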
PyMVPA: A Python toolbox for multivariate pattern analysis of fMRI data
Hanke, Michael; Halchenko, Yaroslav O.; Sederberg, Per B.; Hanson, Stephen José; Haxby, James V.; Pollmann, Stefan
2009-01-01
Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine-learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability. PMID:19184561
Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers.
Sochat, Vanessa V; Prybol, Cameron J; Kurtzer, Gregory M
2017-01-01
Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub's primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric and performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building, and deploying scientific containers.
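The sketch below illustrates the general idea behind the content-hash comparison described above: hash the files in two unpacked container trees and score their overlap. It is an illustration of the concept only, not the singularity-python implementation, and the directory paths are hypothetical.

```python
# Generic sketch of comparing two unpacked container trees by hashing file
# contents and scoring their overlap; an illustration of the idea, not the
# singularity-python implementation. Paths are hypothetical.
import hashlib
from pathlib import Path

def content_hashes(root):
    hashes = set()
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            hashes.add((str(path.relative_to(root)), digest))
    return hashes

def similarity(root_a, root_b):
    a, b = content_hashes(root_a), content_hashes(root_b)
    return len(a & b) / len(a | b) if a | b else 1.0   # Jaccard-style score

print(similarity("/tmp/container_a", "/tmp/container_b"))
```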
Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers
Prybol, Cameron J.; Kurtzer, Gregory M.
2017-01-01
Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub’s primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric and performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building, and deploying scientific containers. PMID:29186161
Lakhujani, Vijay; Badapanda, Chandan
2017-06-01
QIIME (Quantitative Insights Into Microbial Ecology) is one of the most popular open-source bioinformatics suites for performing metagenome, 16S rRNA amplicon and Internal Transcribed Spacer (ITS) data analysis. Although it is a very comprehensive and powerful tool, it lacks a method to provide publication-ready taxonomic pie charts. The script plot_taxa_summary.py bundled with QIIME generates an HTML file and a folder containing the taxonomic pie chart and legend as separate images. The images have randomly generated alphanumeric names. Therefore, it is difficult to associate the pie chart with the legend and the corresponding sample identifier. Even if the option to have the legend within the HTML file is selected while executing plot_taxa_summary.py, it is very tedious to crop a complete image (having both the pie chart and the legend) due to unequal image sizes. It requires a lot of time to manually prepare the pie charts for multiple samples for publication purposes. Moreover, there are chances of error while identifying the pie chart and legend pair due to the random alphanumeric names of the images. To bypass all these bottlenecks and make this process efficient, we have developed a Python-based program, prepare_taxa_charts.py, to automate the renaming, cropping and merging of the taxonomic pie chart and corresponding legend image into a single, good-quality, publication-ready image. This program not only augments the functionality of plot_taxa_summary.py but is also user friendly and very fast in terms of CPU time.
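The sketch below shows the core merging step described above, pasting a pie chart and its legend side by side into one image with Pillow. The file names are hypothetical and this is not the program's actual code.

```python
# Sketch of merging a pie chart and its legend into one image with Pillow,
# the operation prepare_taxa_charts.py automates; file names are hypothetical
# and this is not the program's actual code.
from PIL import Image

pie = Image.open("sample1_pie.png")
legend = Image.open("sample1_legend.png")

# Canvas wide enough for both images, as tall as the taller of the two
merged = Image.new("RGB",
                   (pie.width + legend.width, max(pie.height, legend.height)),
                   "white")
merged.paste(pie, (0, 0))
merged.paste(legend, (pie.width, 0))
merged.save("sample1_merged.png", dpi=(300, 300))
```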
Programming biological models in Python using PySB.
Lopez, Carlos F; Muhlich, Jeremy L; Bachman, John A; Sorger, Peter K
2013-01-01
Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis.
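A minimal PySB-style sketch following the documented rule syntax is shown below: a toy reversible ligand-receptor binding model rather than one of the apoptosis models from the paper, with rate values chosen arbitrarily for illustration.

```python
# Minimal PySB-style sketch of a reversible binding model, following the rule
# syntax in the PySB documentation; a toy example with arbitrary rates, not
# one of the apoptosis models from the paper.
from pysb import Model, Monomer, Parameter, Initial, Rule, Observable
from pysb.simulator import ScipyOdeSimulator
import numpy as np

Model()                               # PySB self-exports components as globals
Monomer('L', ['b'])                   # ligand with one binding site
Monomer('R', ['b'])                   # receptor with one binding site
Parameter('kf', 1e-3)
Parameter('kr', 1e-2)
Parameter('L_0', 100)
Parameter('R_0', 200)
Initial(L(b=None), L_0)
Initial(R(b=None), R_0)
Rule('L_binds_R', L(b=None) + R(b=None) | L(b=1) % R(b=1), kf, kr)
Observable('LR_complex', L(b=1) % R(b=1))

sim = ScipyOdeSimulator(model, tspan=np.linspace(0, 1000, 100))
print(sim.run().observables['LR_complex'][-1])   # complex level at the end
```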
Nunez-Iglesias, Juan; Kennedy, Ryan; Plaza, Stephen M.; Chakraborty, Anirban; Katz, William T.
2014-01-01
The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them. PMID:24772079
Programming biological models in Python using PySB
Lopez, Carlos F; Muhlich, Jeremy L; Bachman, John A; Sorger, Peter K
2013-01-01
Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis. PMID:23423320
Python erythrocytes are resistant to α-hemolysin from Escherichia coli.
Larsen, Casper K; Skals, Marianne; Wang, Tobias; Cheema, Muhammad U; Leipziger, Jens; Praetorius, Helle A
2011-12-01
α-Hemolysin (HlyA) from Escherichia coli lyses mammalian erythrocytes by creating nonselective cation pores in the membrane. Pore insertion triggers ATP release and subsequent P2X receptor and pannexin channel activation. Blockage of either P2X receptors or pannexin channels reduces HlyA-induced hemolysis. We found that erythrocytes from Python regius and Python molurus are remarkably resistant to HlyA-induced hemolysis compared to human and Trachemys scripta erythrocytes. HlyA concentrations that induced maximal hemolysis of human erythrocytes did not affect python erythrocytes, but increasing the HlyA concentration 40-fold did induce hemolysis. Python erythrocytes were more resistant to osmotic stress than human erythrocytes, but osmotic stress tolerance per se did not confer HlyA resistance. Erythrocytes from T. scripta, which showed higher osmotic resistance than python erythrocytes, were as susceptible to HlyA as human erythrocytes. Therefore, we tested whether python erythrocytes lack the purinergic signalling known to amplify HlyA-induced hemolysis in human erythrocytes. P. regius erythrocytes increased intracellular Ca²⁺ concentration and reduced cell volume when exposed to 3 mM ATP, indicating the presence of a P2X₇-like receptor. In addition, scavenging extracellular ATP or blocking P2 receptors or pannexin channels reduced the HlyA-induced hemolysis. We tested whether the low HlyA sensitivity resulted from low affinity of HlyA to the python erythrocyte membrane. We found comparable incorporation of HlyA into human and python erythrocyte membranes. Taken together, the remarkable HlyA resistance of python erythrocytes was not explained by increased osmotic resistance, lack of purinergic hemolysis amplification, or differences in HlyA affinity.
NASA Astrophysics Data System (ADS)
Nguyen, A.; Mueller, C.; Brooks, A. N.; Kislik, E. A.; Baney, O. N.; Ramirez, C.; Schmidt, C.; Torres-Perez, J. L.
2014-12-01
The Sierra Nevada is experiencing changes in hydrologic regimes, such as decreases in snowmelt and peak runoff, which affect forest health and the availability of water resources. Currently, the USDA Forest Service Region 5 is undergoing Forest Plan revisions to include climate change impacts into mitigation and adaptation strategies. However, there are few processes in place to conduct quantitative assessments of forest conditions in relation to mountain hydrology, while easily and effectively delivering that information to forest managers. To assist the USDA Forest Service, this study is the final phase of a three-term project to create a Decision Support System (DSS) to allow ease of access to historical and forecasted hydrologic, climatic, and terrestrial conditions for the entire Sierra Nevada. This data is featured within three components of the DSS: the Mapping Viewer, Statistical Analysis Portal, and Geospatial Data Gateway. Utilizing ArcGIS Online, the Sierra DSS Mapping Viewer enables users to visually analyze and locate areas of interest. Once the areas of interest are targeted, the Statistical Analysis Portal provides subbasin level statistics for each variable over time by utilizing a recently developed web-based data analysis and visualization tool called Plotly. This tool allows users to generate graphs and conduct statistical analyses for the Sierra Nevada without the need to download the dataset of interest. For more comprehensive analysis, users are also able to download datasets via the Geospatial Data Gateway. The third phase of this project focused on Python-based data processing, the adaptation of the multiple capabilities of ArcGIS Online and Plotly, and the integration of the three Sierra DSS components within a website designed specifically for the USDA Forest Service.
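A generic Plotly sketch of the kind of interactive, web-based time-series graph the Statistical Analysis Portal generates is given below. The data and variable names are illustrative, and the current plotly.graph_objects API is used as a stand-in rather than the project's own code.

```python
# Generic Plotly sketch of an interactive subbasin time-series graph like
# those described above; the data are synthetic and this uses the current
# plotly.graph_objects API as a stand-in, not the project's code.
import numpy as np
import plotly.graph_objects as go

years = np.arange(1980, 2015)
peak_runoff = 120 + 10 * np.sin(years / 3.0) - 0.8 * (years - 1980)  # toy declining trend

fig = go.Figure(go.Scatter(x=years, y=peak_runoff, mode="lines+markers",
                           name="Peak runoff"))
fig.update_layout(title="Example subbasin: annual peak runoff",
                  xaxis_title="Water year", yaxis_title="Runoff (mm)")
fig.write_html("peak_runoff.html")   # shareable, interactive web graph
```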
HEDEA: A Python Tool for Extracting and Analysing Semi-structured Information from Medical Records
Aggarwal, Anshul; Garhwal, Sunita
2018-01-01
Objectives: One of the most important functions for a medical practitioner while treating a patient is to study the patient's complete medical history by going through all records, from test results to doctor's notes. With the increasing use of technology in medicine, these records are mostly digital, alleviating the problem of looking through a stack of papers, which are easily misplaced, but some of these are in an unstructured form. Large parts of clinical reports are in written text form and are tedious to use directly without appropriate pre-processing. In medical research, such health records may be a good, convenient source of medical data; however, lack of structure means that the data is unfit for statistical evaluation. In this paper, we introduce a system to extract, store, retrieve, and analyse information from health records, with a focus on the Indian healthcare scene. Methods: A Python-based tool, Healthcare Data Extraction and Analysis (HEDEA), has been designed to extract structured information from various medical records using a regular expression-based approach. Results: The HEDEA system is working, covering a large set of formats, to extract and analyse health information. Conclusions: This tool can be used to generate analysis report and charts using the central database. This information is only provided after prior approval has been received from the patient for medical research purposes. PMID:29770248
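The sketch below illustrates the general regular-expression-based approach described here: pulling named fields out of semi-structured report text. The sample text and field patterns are hypothetical, not HEDEA's actual extraction rules.

```python
# Generic sketch of regular-expression-based extraction from semi-structured
# report text, illustrating the approach HEDEA takes; the sample text and
# field patterns are hypothetical, not HEDEA's rules.
import re

report = """Patient Name: A. Kumar
Age: 54 years
Haemoglobin: 11.2 g/dL
Fasting Glucose: 142 mg/dL"""

patterns = {
    "age": r"Age:\s*(\d+)",
    "haemoglobin": r"Haemoglobin:\s*([\d.]+)\s*g/dL",
    "glucose": r"Fasting Glucose:\s*([\d.]+)\s*mg/dL",
}

record = {}
for field, pattern in patterns.items():
    match = re.search(pattern, report)
    record[field] = match.group(1) if match else None   # structured output

print(record)   # {'age': '54', 'haemoglobin': '11.2', 'glucose': '142'}
```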
HEDEA: A Python Tool for Extracting and Analysing Semi-structured Information from Medical Records.
Aggarwal, Anshul; Garhwal, Sunita; Kumar, Ajay
2018-04-01
One of the most important functions for a medical practitioner while treating a patient is to study the patient's complete medical history by going through all records, from test results to doctor's notes. With the increasing use of technology in medicine, these records are mostly digital, alleviating the problem of looking through a stack of papers, which are easily misplaced, but some of these are in an unstructured form. Large parts of clinical reports are in written text form and are tedious to use directly without appropriate pre-processing. In medical research, such health records may be a good, convenient source of medical data; however, lack of structure means that the data is unfit for statistical evaluation. In this paper, we introduce a system to extract, store, retrieve, and analyse information from health records, with a focus on the Indian healthcare scene. A Python-based tool, Healthcare Data Extraction and Analysis (HEDEA), has been designed to extract structured information from various medical records using a regular expression-based approach. The HEDEA system is working, covering a large set of formats, to extract and analyse health information. This tool can be used to generate analysis report and charts using the central database. This information is only provided after prior approval has been received from the patient for medical research purposes.
NASA Astrophysics Data System (ADS)
Fatland, R.; Tan, A.; Arendt, A. A.
2016-12-01
We describe a Python-based implementation of a PostgreSQL database accessed through an Application Programming Interface (API) hosted on the Amazon Web Services public cloud. The data is geospatial and concerns hydrological model results in the glaciated catchment basins of southcentral and southeast Alaska. This implementation, however, is intended to be generalized to other forms of geophysical data, particularly data that is intended to be shared across a collaborative team or publicly. An example (moderate-size) dataset is provided together with the code base and a complete installation tutorial on GitHub. An enthusiastic scientist with some familiarity with software installation can replicate the example system in two hours. This installation includes the database, the API, a test client, and a supporting Jupyter Notebook oriented towards Python 3, with markup text to comprise an executable paper. The installation 'on the cloud' often engenders discussion and consideration of cloud cost and safety. By treating the process as a "cookbook" we hope first to demonstrate the feasibility of the proposition. A discussion of cost and data security is provided in this presentation and in the accompanying tutorial/documentation. This geospatial data system case study is part of a larger effort at the University of Washington to enable research teams to take advantage of the public cloud to meet challenges in data management and analysis.
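A client-side interaction with such an API can be sketched with the requests library; the endpoint URL, query parameters, and JSON layout below are hypothetical stand-ins, not the project's actual interface.

```python
# Hedged sketch of querying a geospatial REST API from Python; the endpoint and
# parameter names are hypothetical placeholders, not the project's actual API.
import requests

BASE_URL = "https://example.org/api/v1"   # hypothetical endpoint

params = {
    "basin": "wolverine",                 # hypothetical catchment identifier
    "variable": "runoff",
    "start": "2015-01-01",
    "end": "2015-12-31",
}
resp = requests.get(f"{BASE_URL}/timeseries", params=params, timeout=30)
resp.raise_for_status()
for row in resp.json()["data"]:           # assumed JSON layout
    print(row["date"], row["value"])
```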
MEG and EEG data analysis with MNE-Python.
Gramfort, Alexandre; Luessi, Martin; Larson, Eric; Engemann, Denis A; Strohmeier, Daniel; Brodbeck, Christian; Goj, Roman; Jas, Mainak; Brooks, Teon; Parkkonen, Lauri; Hämäläinen, Matti
2013-12-26
Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne.
MEG and EEG data analysis with MNE-Python
Gramfort, Alexandre; Luessi, Martin; Larson, Eric; Engemann, Denis A.; Strohmeier, Daniel; Brodbeck, Christian; Goj, Roman; Jas, Mainak; Brooks, Teon; Parkkonen, Lauri; Hämäläinen, Matti
2013-01-01
Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne. PMID:24431986
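A minimal M/EEG pipeline of the kind the abstract describes might look like the sketch below; the file name and event code are placeholders, and the calls shown follow MNE-Python's documented interface.

```python
# Hedged sketch of an M/EEG preprocessing and averaging pipeline with MNE-Python;
# the file name and event codes are placeholders for a real recording.
import mne

raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)  # placeholder file
raw.filter(l_freq=1.0, h_freq=40.0)          # band-pass filter the continuous data

events = mne.find_events(raw)                # read trigger events from the stim channel
epochs = mne.Epochs(raw, events, event_id={"stimulus": 1},  # hypothetical event code
                    tmin=-0.2, tmax=0.5, baseline=(None, 0))

evoked = epochs.average()                    # average epochs into an evoked response
evoked.plot()
```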
PEITH(Θ): perfecting experiments with information theory in Python with GPU support.
Dony, Leander; Mackerodt, Jonas; Ward, Scott; Filippi, Sarah; Stumpf, Michael P H; Liepe, Juliane
2018-04-01
Different experiments provide differing levels of information about a biological system. This makes it difficult, a priori, to select one of them beyond mere speculation and/or belief, especially when resources are limited. With the increasing diversity of experimental approaches and general advances in quantitative systems biology, methods that inform us about the information content that a given experiment carries about the question we want to answer, become crucial. PEITH(Θ) is a general purpose, Python framework for experimental design in systems biology. PEITH(Θ) uses Bayesian inference and information theory in order to derive which experiments are most informative in order to estimate all model parameters and/or perform model predictions. https://github.com/MichaelPHStumpf/Peitho. m.stumpf@imperial.ac.uk or juliane.liepe@mpibpc.mpg.de.
New Version of SeismicHandler (SHX) based on ObsPy
NASA Astrophysics Data System (ADS)
Stammler, Klaus; Walther, Marcus
2016-04-01
The command line version of SeismicHandler (SH), a scientific analysis tool for seismic waveform data developed around 1990, has been redesigned in recent years in a project funded by the Deutsche Forschungsgemeinschaft (DFG). The aim was to address new data access techniques, simplified metadata handling, and a modularized software design. As a result, the main parts of the program were rewritten in Python, taking advantage of the simplicity of this scripting language and its variety of well-developed software libraries, including ObsPy. SHX provides easy access to waveforms and metadata via the arclink and FDSN webservice protocols; access to event catalogs is also implemented. With single commands, whole networks or stations within a certain area may be read in; the metadata are retrieved from the servers and stored in a local database. For data processing, the large set of SH commands is available, as well as the SH scripting language. Via SH language scripts or additional Python modules, the command set of SHX is easily extendable. The program is open source and tested on Linux operating systems; documentation and downloads are found at URL "https://www.seismic-handler.org/".
Rong, Y; Padron, A V; Hagerty, K J; Nelson, N; Chi, S; Keyhani, N O; Katz, J; Datta, S P A; Gomes, C; McLamore, E S
2018-04-30
Impedimetric biosensors for measuring small molecules based on weak/transient interactions between bioreceptors and target analytes are a challenge for detection electronics, particularly in field studies or in the analysis of complex matrices. Protein-ligand binding sensors have enormous potential for biosensing, but achieving accuracy in complex solutions is a major challenge. There is a need for simple post hoc analytical tools that are not computationally expensive, yet provide near-real-time feedback on data derived from impedance spectra. Here, we show the use of a simple, open source support vector machine learning algorithm for analyzing impedimetric data in lieu of using equivalent circuit analysis. We demonstrate two different protein-based biosensors to show that the tool can be used for various applications. We conclude with a mobile phone-based demonstration focused on the measurement of acetone, an important biomarker related to the onset of diabetic ketoacidosis. In all conditions tested, the open source classifier was capable of performing as well as or better than equivalent circuit analysis for characterizing weak/transient interactions between a model ligand (acetone) and a small chemosensory protein derived from the tsetse fly. In addition, the tool has a low computational requirement, facilitating use for mobile acquisition systems such as mobile phones. The protocol is deployed through Jupyter notebook (an open source computing environment available for mobile phone, tablet or computer use) and the code was written in Python. For each of the applications, we provide step-by-step instructions in English, Spanish, Mandarin and Portuguese to facilitate widespread use. All code is based on scikit-learn, an open-source machine learning library for Python, and runs in Jupyter notebooks. The tool can easily be integrated with the mobile biosensor equipment for rapid detection, facilitating use by a broad range of impedimetric biosensor users. This post hoc analysis tool can serve as a launchpad for the convergence of nanobiosensors in planetary health monitoring applications based on mobile phone hardware.
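A support vector machine classifier built with scikit-learn, as the abstract describes, could follow a pattern like the one below; the impedance-derived features and labels are synthetic stand-ins for real spectra.

```python
# Hedged sketch of an SVM classifier for impedance-derived features using
# scikit-learn; the features and labels here are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                  # e.g. impedance magnitude/phase features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "analyte present" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```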
Using WNTR to Model Water Distribution System Resilience ...
The Water Network Tool for Resilience (WNTR) is a new open source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate the resilience of water distribution systems. WNTR can be used to simulate a wide range of disruptive events, including earthquakes, contamination incidents, floods, climate change, and fires. The software includes the EPANET solver as well as a WNTR solver with the ability to model pressure-driven demand hydraulics, pipe breaks, component degradation and failure, changes to supply and demand, and cascading failure. Damage to individual components in the network (e.g., pipes, tanks) can be selected probabilistically using fragility curves. WNTR can also simulate different types of resilience-enhancing actions, including scheduled pipe repair or replacement, water conservation efforts, addition of back-up power, and use of contamination warning systems. The software can be used to estimate potential damage in a network, evaluate preparedness, prioritize repair strategies, and identify worst-case scenarios. As a Python package, WNTR takes advantage of many existing Python capabilities, including parallel processing of scenarios and graphics capabilities. This presentation will outline the modeling components in WNTR, demonstrate their use, give the audience information on how to get started using the code, and invite others to participate in this open source project.
Stata Hybrids: Updates and Ideas
NASA Technical Reports Server (NTRS)
Fieldler, James
2014-01-01
At last year's Stata conference I presented two projects for using Python with Stata: a plugin that embeds the Python programming language within Stata and code for using Stata data sets in Python. In this talk I will describe some small improvements being made to these projects, and I will present other ideas for combining tools with Stata. Some of these ideas use Python, some use JavaScript and a web browser.
Boback, Scott M.; Snow, Ray W.; Hsu, Teresa; Peurach, Suzanne C.; Dove, Carla J.; Reed, Robert N.
2016-01-01
Snakes have become successful invaders in a wide variety of ecosystems worldwide. In southern Florida, USA, the Burmese python (Python molurus bivittatus) has become established across thousands of square kilometers including all of Everglades National Park (ENP). Both experimental and correlative data have supported a relationship between Burmese python predation and declines or extirpations of mid- to large-sized mammals in ENP. In June 2013 a large python (4.32 m snout-vent length, 48.3 kg) was captured and removed from the park. Subsequent necropsy revealed a massive amount of fecal matter (79 cm in length, 6.5 kg) within the snake’s large intestine. A comparative examination of bone, teeth, and hooves extracted from the fecal contents revealed that this snake consumed three white-tailed deer (Odocoileus virginianus). This is the first report of an invasive Burmese python containing the remains of multiple white-tailed deer in its gut. Because the largest snakes native to southern Florida are not capable of consuming even mid-sized mammals, pythons likely represent a novel predatory threat to white-tailed deer in these habitats. This work highlights the potential impact of this large-bodied invasive snake and supports the need for more work on invasive predator-native prey relationships.
Facultative thermogenesis during brooding is not the norm among pythons.
Brashears, Jake; DeNardo, Dale F
2015-08-01
Facultative thermogenesis is often attributed to pythons in general despite limited comparative data available for the family. While all species within Pythonidae brood their eggs, only two species are known to produce heat to enhance embryonic thermal regulation. By contrast, a few python species have been reported to have insignificant thermogenic capabilities. To provide insight into potential phylogenetic, morphological, and ecological factors influencing thermogenic capability among pythons, we measured metabolic rates and clutch-environment temperature differentials at two environmental temperatures, the python preferred brooding temperature (31.5 °C) and a sub-optimal temperature (25.5 °C), in six species of pythons, including members of two major phylogenetic branches currently devoid of data on the subject. We found no evidence of facultative thermogenesis in five species: Aspidites melanocephalus, A. ramsayi, Morelia viridis, M. spilota cheynei, and Python regius. However, we found that Bothrochilus boa had a thermal metabolic sensitivity indicative of facultative thermogenesis (i.e., a higher metabolic rate at the lower temperature). Its metabolic rate was quite low, however, and technical challenges prevented us from measuring the temperature differential needed to draw conclusions about facultative endothermy in this species. Regardless, our data combined with existing literature demonstrate that facultative thermogenesis is not as widespread among pythons as previously thought.
Falk, Bryan; Reed, Robert N.
2015-01-01
Molecular approaches to prey identification are increasingly useful in elucidating predator–prey relationships, and we aimed to investigate the feasibility of these methods to document the species identities of prey consumed by invasive Burmese pythons in Florida. We were particularly interested in the diet of young snakes, because visual identification of prey from this size class has proven difficult. We successfully extracted DNA from the gastrointestinal contents of 43 young pythons, as well as from several control samples, and attempted amplification of DNA mini-barcodes, a 130-bp region of COX1. Using a PNA clamp to exclude python DNA, we found that prey DNA was not present in sufficient quality for amplification of this locus in 86% of our samples. All samples from the GI tracts of young pythons contained only hair, and the six samples we were able to identify to species were hispid cotton rats. This suggests that young Burmese pythons prey predominantly on small mammals and that prey diversity among snakes of this size class is low. We discuss prolonged gastrointestinal transit times and extreme gastric breakdown as possible causes of DNA degradation that limit the success of a molecular approach to prey identification in Burmese pythons.
Practical Approach for Hyperspectral Image Processing in Python
NASA Astrophysics Data System (ADS)
Annala, L.; Eskelinen, M. A.; Hämäläinen, J.; Riihinen, A.; Pölönen, I.
2018-04-01
Python is a very popular programming language among data scientists around the world. Python can also be used in hyperspectral data analysis. There are some toolboxes designed for spectral imaging, such as Spectral Python and HyperSpy, but there is a need for an analysis pipeline that is easy to use and agile enough for different solutions. We propose a Python pipeline built on the packages xarray, Holoviews, and scikit-learn. We have also developed some of our own tools: MaskAccessor, VisualisorAccessor, and a spectral index library. These fulfill our goal of easy and agile data processing. In this paper we present our processing pipeline and demonstrate it in practice.
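The accessor-based tooling mentioned (MaskAccessor, VisualisorAccessor) presumably builds on xarray's accessor registration mechanism; the sketch below shows that mechanism with a hypothetical "mask" accessor and is not the authors' actual code.

```python
# Hedged sketch of a custom xarray accessor, illustrating the mechanism a
# MaskAccessor-style tool presumably uses; not the authors' implementation.
import numpy as np
import xarray as xr

@xr.register_dataarray_accessor("mask")        # hypothetical accessor name
class MaskAccessor:
    def __init__(self, da):
        self._da = da

    def threshold(self, value):
        """Return the DataArray with values below `value` masked out."""
        return self._da.where(self._da >= value)

cube = xr.DataArray(np.random.rand(4, 5, 6), dims=("band", "y", "x"))
masked = cube.mask.threshold(0.5)              # accessor is available on any DataArray
print(masked)
```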
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Daniel; Vesselinov, Velimir V.
MADSpython (Model analysis and decision support tools in Python) is a code in Python that streamlines the process of using data and models for analysis and decision support using the code MADS. MADS is open-source code developed at LANL and written in C/C++ (MADS; http://mads.lanl.gov; LA-CC-11-035). MADS can work with external models of arbitrary complexity as well as built-in models of flow and transport in porous media. The Python scripts in MADSpython facilitate the generation of input and output files needed by MADS as well as by the external simulators, which include FEHM and PFLOTRAN. MADSpython enables a number of data- and model-based analyses including model calibration, sensitivity analysis, uncertainty quantification, and decision analysis. MADSpython will be released under the GPL v3 license. MADSpython will be distributed as a Git repo at gitlab.com and github.com. The MADSpython manual and documentation will be posted at http://madspy.lanl.gov.
Using Python as a first programming environment for computational physics in developing countries
NASA Astrophysics Data System (ADS)
Akpojotor, Godfrey; Ehwerhemuepha, Louis; Echenim, Myron; Akpojotor, Famous
2011-03-01
Python's unique features, such as its interpreted, multiplatform, and object-oriented nature, as well as being free and open-source software, create the possibility that any user connected to the internet can download the entire package onto any platform, install it, and immediately begin to use it. Thus Python is gaining a reputation as a preferred environment for introducing students and new beginners to programming. In Africa, the Python African Tour project has therefore been launched, and we are coordinating its use in computational science. We examine here the challenges and prospects of using Python for computational physics (CP) education in developing countries (DC). We then present our project on using Python to simulate and aid the learning of laboratory experiments, illustrated here by modeling of the simple pendulum, and to visualize phenomena in physics, illustrated here by demonstrating the wave motion of a particle in a varying potential. This project, which trains both teachers and students in CP using Python, can easily be adopted in other developing countries.
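A simple-pendulum simulation of the kind used for such teaching can be written in a few lines of standard scientific Python; the parameter values below are illustrative, not taken from the project's materials.

```python
# Hedged sketch of simulating a simple pendulum with SciPy, of the kind used in
# teaching computational physics; parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

g, L = 9.81, 1.0                      # gravity (m/s^2) and pendulum length (m)

def pendulum(t, y):
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

sol = solve_ivp(pendulum, (0, 10), [0.2, 0.0], dense_output=True)
t = np.linspace(0, 10, 500)
plt.plot(t, sol.sol(t)[0])
plt.xlabel("time (s)")
plt.ylabel("angle (rad)")
plt.show()
```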
Psyplot: Visualizing rectangular and triangular Climate Model Data with Python
NASA Astrophysics Data System (ADS)
Sommer, Philipp
2016-04-01
The development and use of climate models often requires the visualization of geo-referenced data. Creating visualizations should be fast, attractive, flexible, easily applicable and easily reproducible. There is a wide range of software tools available for visualizing raster data, but they are often inaccessible to many users (e.g. because they are difficult to use in a script or have low flexibility). In order to facilitate easy visualization of geo-referenced data, we developed a new framework called "psyplot," which can aid earth system scientists with their daily work. It is purely written in the programming language Python and primarily built upon the Python packages matplotlib, cartopy and xray. The package can visualize data stored on disk (e.g. NetCDF, GeoTIFF, or any other file format supported by the xray package), directly from memory, or via Climate Data Operators (CDOs). Furthermore, data can be visualized on a rectangular grid (following or not following the CF Conventions) and on a triangular grid (following the CF or UGRID Conventions). Psyplot visualizes 2D scalar and vector fields, enabling the user to easily manage and format multiple plots at the same time, and to export the plots into all common picture formats and movies covered by the matplotlib package. The package can currently be used in an interactive Python session or in Python scripts, and will soon be developed for use with a graphical user interface (GUI). Finally, the psyplot framework enables flexible configuration, allows easy integration into other scripts that use matplotlib, and provides a flexible foundation for further development.
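Psyplot's own plotting interface is not shown here; the sketch below instead illustrates the underlying xarray/matplotlib/cartopy stack it is built on, with a placeholder NetCDF file and variable name.

```python
# Hedged sketch of plotting geo-referenced NetCDF data with the stack psyplot
# builds on (xarray + matplotlib + cartopy); file and variable are placeholders.
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ds = xr.open_dataset("model_output.nc")          # placeholder NetCDF file
field = ds["t2m"].isel(time=0)                   # hypothetical 2D temperature field

ax = plt.axes(projection=ccrs.PlateCarree())
field.plot(ax=ax, transform=ccrs.PlateCarree())  # xarray delegates to matplotlib
ax.coastlines()
plt.show()
```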
Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T
2015-04-30
New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool, as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Python is a high-level scripting language that is becoming increasingly popular for scientific computing. This all-day workshop is designed to introduce the basics of Python programming to ecologists. Some scripting/programming experience is recommended (e.g. familiarity with R)....
Automating Disk Forensic Processing with SleuthKit, XML and Python
2009-05-01
1 Automating Disk Forensic Processing with SleuthKit, XML and Python Simson L. Garfinkel Abstract We have developed a program called fiwalk which...files themselves. We show how it is relatively simple to create automated disk forensic applications using a Python module we have written that reads...software that the portable device may contain. Keywords: Computer Forensics; XML; Sleuth Kit; Python I. INTRODUCTION In recent years we have found many
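Reading a fiwalk-style XML file listing from Python can be sketched with the standard library; the element and attribute names below are hypothetical stand-ins rather than the actual fiwalk/DFXML schema.

```python
# Hedged sketch of walking a forensic XML file listing from Python; the tag and
# field names are hypothetical, not the actual fiwalk/DFXML schema.
import xml.etree.ElementTree as ET

tree = ET.parse("disk_image.xml")                # placeholder fiwalk-style output
root = tree.getroot()

for fileobject in root.iter("fileobject"):       # hypothetical element name
    name = fileobject.findtext("filename")
    size = fileobject.findtext("filesize")
    print(f"{name}\t{size} bytes")
```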
Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit
O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R
2008-01-01
Background Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Results Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Conclusion Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers. PMID:18328109
Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit.
O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R
2008-03-09
Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers.
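Typical Pybel usage as described might look like the following; the SMILES string and file name are placeholders, and in recent OpenBabel releases the module is imported as `from openbabel import pybel`.

```python
# Hedged sketch of common Pybel tasks; the SMILES string and file name are
# placeholders. In OpenBabel 3.x the import is `from openbabel import pybel`.
import pybel

mol = pybel.readstring("smi", "CCO")        # build a molecule from SMILES (ethanol)
print(mol.molwt)                            # molecular weight
fp = mol.calcfp()                           # fingerprint, e.g. for similarity searches

# Iterate over all molecules in a file, as the abstract describes
for m in pybel.readfile("sdf", "library.sdf"):   # placeholder file
    print(m.title, m.molwt)
```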
NASA Astrophysics Data System (ADS)
Bogdanchikov, A.; Zhaparov, M.; Suliyev, R.
2013-04-01
Today we have many programming languages that can meet our needs, but the most important question is how to teach programming to beginning students. In this paper we suggest using Python for this purpose, because it is a programming language with neatly organized syntax and powerful tools for solving any task; moreover, it is very close to simple mathematical thinking. Python is chosen as a primary programming language for freshmen in most leading universities, and writing code in Python is easy. In this paper we give examples of program code written in Java, C++, and Python, and we compare them. First, the paper lays out the advantages of Python relative to C++ and Java. It then shows the results of a comparison of short programs written in the three languages, followed by a discussion of how students understand programming. Finally, experimental results on students' success in programming courses are shown.
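The kind of concise beginner-level Python program the paper compares against Java and C++ might be as small as the snippet below; this is an illustrative example, not one of the paper's actual comparison codes.

```python
# Illustrative beginner example of the concise style the paper highlights;
# not one of the paper's actual comparison codes.
numbers = [float(x) for x in input("Enter numbers separated by spaces: ").split()]
print("mean =", sum(numbers) / len(numbers))
```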
Esquerré, Damien; Sherratt, Emma; Keogh, J Scott
2017-12-01
Ontogenetic allometry, how species change with size through their lives, and heterochony, a decoupling between shape, size, and age, are major contributors to biological diversity. However, macroevolutionary allometric and heterochronic trends remain poorly understood because previous studies have focused on small groups of closely related species. Here, we focus on testing hypotheses about the evolution of allometry and how allometry and heterochrony drive morphological diversification at the level of an entire species-rich and diverse clade. Pythons are a useful system due to their remarkably diverse and well-adapted phenotypes and extreme size disparity. We collected detailed phenotype data on 40 of the 44 species of python from 1191 specimens. We used a suite of analyses to test for shifts in allometric trajectories that modify morphological diversity. Heterochrony is the main driver of initial divergence within python clades, and shifts in the slopes of allometric trajectories make exploration of novel phenotypes possible later in divergence history. We found that allometric coefficients are highly evolvable and there is an association between ontogenetic allometry and ecology, suggesting that allometry is both labile and adaptive rather than a constraint on possible phenotypes. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
DREAMTools: a Python package for scoring collaborative challenges
Cokelaer, Thomas; Bansal, Mukesh; Bare, Christopher; Bilal, Erhan; Bot, Brian M.; Chaibub Neto, Elias; Eduati, Federica; de la Fuente, Alberto; Gönen, Mehmet; Hill, Steven M.; Hoff, Bruce; Karr, Jonathan R.; Küffner, Robert; Menden, Michael P.; Meyer, Pablo; Norel, Raquel; Pratap, Abhishek; Prill, Robert J.; Weirauch, Matthew T.; Costello, James C.; Stolovitzky, Gustavo; Saez-Rodriguez, Julio
2016-01-01
DREAM challenges are community competitions designed to advance computational methods and address fundamental questions in systems biology and translational medicine. Each challenge asks participants to develop and apply computational methods either to predict unobserved outcomes or to identify unknown model parameters given a set of training data. Computational methods are evaluated using an automated scoring metric, scores are posted to a public leaderboard, and methods are published to facilitate community discussions on how to build improved methods. By engaging participants from a wide range of science and engineering backgrounds, DREAM challenges can comparatively evaluate a wide range of statistical, machine learning, and biophysical methods. Here, we describe DREAMTools, a Python package for evaluating DREAM challenge scoring metrics. DREAMTools provides a command line interface that enables researchers to test new methods on past challenges, as well as a framework for scoring new challenges. As of March 2016, DREAMTools includes more than 80% of completed DREAM challenges. DREAMTools complements the data, metadata, and software tools available at the DREAM website http://dreamchallenges.org and on the Synapse platform at https://www.synapse.org. Availability: DREAMTools is a Python package. Releases and documentation are available at http://pypi.python.org/pypi/dreamtools. The source code is available at http://github.com/dreamtools/dreamtools. PMID:27134723
Vectorized algorithms for spiking neural network simulation.
Brette, Romain; Goodman, Dan F M
2011-06-01
High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
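The vectorized style the paper describes can be illustrated with a NumPy update of many leaky integrate-and-fire neurons at once; this is a generic sketch of the technique, not Brian's internal code, and the parameter values are illustrative.

```python
# Generic sketch of vectorized leaky integrate-and-fire updates with NumPy,
# illustrating the style the paper describes; not Brian's internal code.
import numpy as np

N, dt, tau = 10_000, 0.1e-3, 10e-3        # neurons, time step (s), membrane time constant (s)
v_rest, v_reset, v_thresh = -70e-3, -65e-3, -50e-3

v = np.full(N, v_rest)
I = np.random.uniform(0.0, 25e-3, N)      # constant input drive per neuron (volts)

for step in range(1000):
    v += dt / tau * (v_rest - v + I)      # update every neuron in one vector operation
    spiked = v >= v_thresh                # boolean mask of spiking neurons
    v[spiked] = v_reset                   # reset spiking neurons, again vectorized
```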
Early Market Site Identification Data
Levi Kilcher
2016-04-01
This data was compiled for the 'Early Market Opportunity Hot Spot Identification' project. The data and scripts included were used in the 'MHK Energy Site Identification and Ranking Methodology' Reports (Part I: Wave, NREL Report #66038; Part II: Tidal, NREL Report #66079). The Python scripts will generate a set of results--based on the Excel data files--some of which were described in the reports. The scripts depend on the 'score_site' package, and the score_site package depends on a number of standard Python libraries (see the score_site install instructions).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blom, Philip Stephen; Marcillo, Omar Eduardo; Euler, Garrett Gene
InfraPy is a Python-based analysis toolkit being developed at LANL. The algorithms are intended for ground-based nuclear detonation detection applications to detect, locate, and characterize explosive sources using infrasonic observations. The implementation is usable as a stand-alone Python library or as a command-line-driven tool operating directly on a database. With multiple scientists working on the project, we've begun using a LANL git repository for collaborative development and version control. Current and planned work on InfraPy focuses on the development of new algorithms and propagation models. Collaboration with Southern Methodist University (SMU) has helped identify bugs and limitations of the algorithms. Current usage development focuses on library imports and the CLI.
ObsPy: A Python toolbox for seismology - Current state, applications, and ecosystem around it
NASA Astrophysics Data System (ADS)
Lecocq, Thomas; Megies, Tobias; Krischer, Lion; Sales de Andrade, Elliott; Barsch, Robert; Beyreuther, Moritz
2016-04-01
ObsPy (http://www.obspy.org) is a community-driven, open-source project offering a bridge for seismology into the scientific Python ecosystem. It provides:
* read and write support for essentially all commonly used waveform, station, and event metadata formats with a unified interface,
* a comprehensive signal processing toolbox tuned to the needs of seismologists,
* integrated access to all large data centers, web services and databases, and
* convenient wrappers to third party codes like libmseed and evalresp.
Python, in contrast to many other languages and tools, is simple enough to enable an exploratory and interactive coding style desired by many scientists. At the same time it is a full-fledged programming language usable by software engineers to build complex and large programs. This combination makes it very suitable for use in seismology where research code often has to be translated to stable and production ready environments. It furthermore offers many freely available high quality scientific modules covering most needs in developing scientific software. ObsPy has been in constant development for more than 5 years and nowadays enjoys a large rate of adoption in the community with thousands of users. Successful applications include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions to name a few examples. Additionally it sparked the development of several more specialized packages slowly building a modern seismological ecosystem around it. This contribution will give a short introduction and overview of ObsPy and highlight a number of use cases and software built around it. We will furthermore discuss the issue of sustainability of scientific software.
ObsPy: A Python toolbox for seismology - Current state, applications, and ecosystem around it
NASA Astrophysics Data System (ADS)
Krischer, L.; Megies, T.; Sales de Andrade, E.; Barsch, R.; Beyreuther, M.
2015-12-01
ObsPy (http://www.obspy.org) is a community-driven, open-source project offering a bridge for seismology into the scientific Python ecosystem. It provides read and write support for essentially all commonly used waveform, station, and event metadata formats with a unified interface, a comprehensive signal processing toolbox tuned to the needs of seismologists, integrated access to all large data centers, web services and databases, and convenient wrappers to third party codes like libmseed and evalresp. Python, in contrast to many other languages and tools, is simple enough to enable an exploratory and interactive coding style desired by many scientists. At the same time it is a full-fledged programming language usable by software engineers to build complex and large programs. This combination makes it very suitable for use in seismology where research code often has to be translated to stable and production ready environments. It furthermore offers many freely available high quality scientific modules covering most needs in developing scientific software. ObsPy has been in constant development for more than 5 years and nowadays enjoys a large rate of adoption in the community with thousands of users. Successful applications include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions to name a few examples. Additionally it sparked the development of several more specialized packages slowly building a modern seismological ecosystem around it. This contribution will give a short introduction and overview of ObsPy and highlight a number of use cases and software built around it. We will furthermore discuss the issue of sustainability of scientific software.
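A short ObsPy session using the integrated data-center access and the signal-processing toolbox might look like this; the network and station codes and the time window are placeholders.

```python
# Hedged sketch of fetching and filtering waveforms with ObsPy; station codes
# and the time window are placeholders.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2015-01-01T00:00:00")
st = client.get_waveforms(network="IU", station="ANMO", location="00",
                          channel="BHZ", starttime=t0, endtime=t0 + 3600)

st.detrend("linear")
st.filter("bandpass", freqmin=0.1, freqmax=1.0)   # signal-processing toolbox
st.plot()
```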
Near-Infrared Neuroimaging with NinPy
Strangman, Gary E.; Zhang, Quan; Zeffiro, Thomas
2009-01-01
There has been substantial recent growth in the use of non-invasive optical brain imaging in studies of human brain function in health and disease. Near-infrared neuroimaging (NIN) is one of the most promising of these techniques and, although NIN hardware continues to evolve at a rapid pace, software tools supporting optical data acquisition, image processing, statistical modeling, and visualization remain less refined. Python, a modular and computationally efficient development language, can support functional neuroimaging studies of diverse design and implementation. In particular, Python's easily readable syntax and modular architecture allow swift prototyping followed by efficient transition to stable production systems. As an introduction to our ongoing efforts to develop Python software tools for structural and functional neuroimaging, we discuss: (i) the role of non-invasive diffuse optical imaging in measuring brain function, (ii) the key computational requirements to support NIN experiments, (iii) our collection of software tools to support NIN, called NinPy, and (iv) future extensions of these tools that will allow integration of optical with other structural and functional neuroimaging data sources. Source code for the software discussed here will be made available at www.nmr.mgh.harvard.edu/Neural_SystemsGroup/software.html. PMID:19543449
PyEEG: an open source Python module for EEG/MEG feature extraction.
Bao, Forrest Sheng; Liu, Xin; Zhang, Christina
2011-01-01
Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction
Bao, Forrest Sheng; Liu, Xin; Zhang, Christina
2011-01-01
Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction. PMID:21512582
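PyEEG exposes feature-extraction functions that operate on a plain NumPy array; the function names and signatures below follow the paper's description and are assumptions that should be checked against the installed version.

```python
# Hedged sketch of extracting EEG features with PyEEG; function names and
# signatures follow the paper's description and are assumptions to verify
# against the installed version.
import numpy as np
import pyeeg

fs = 256                                     # sampling rate in Hz (illustrative)
signal = np.random.randn(10 * fs)            # placeholder 10-second EEG channel

print("Petrosian fractal dimension:", pyeeg.pfd(signal))
print("Hurst exponent:", pyeeg.hurst(signal))

bands = [0.5, 4, 8, 13, 30]                  # delta/theta/alpha/beta boundaries
power, power_ratio = pyeeg.bin_power(signal, bands, fs)
print("relative band power:", power_ratio)
```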
Transcriptome analysis of the response of Burmese python to digestion
Sanggaard, Kristian Wejse; Schauser, Leif; Lauridsen, Sanne Enok; Enghild, Jan J.
2017-01-01
Abstract Exceptional and extreme feeding behaviour makes the Burmese python (Python bivittatus) an interesting model to study physiological remodelling and metabolic adaptation in response to refeeding after prolonged starvation. In this study, we used transcriptome sequencing of 5 visceral organs during fasting as well as 24 hours and 48 hours after ingestion of a large meal to unravel the postprandial changes in Burmese pythons. We first used the pooled data to perform a de novo assembly of the transcriptome and supplemented this with a proteomic survey of enzymes in the plasma and gastric fluid. We constructed a high-quality transcriptome with 34 423 transcripts, of which 19 713 (57%) were annotated. Among highly expressed genes (fragments per kilo base per million sequenced reads > 100 in 1 tissue), we found that the transition from fasting to digestion was associated with differential expression of 43 genes in the heart, 206 genes in the liver, 114 genes in the stomach, 89 genes in the pancreas, and 158 genes in the intestine. We interrogated the function of these genes to test previous hypotheses on the response to feeding. We also used the transcriptome to identify 314 secreted proteins in the gastric fluid of the python. Digestion was associated with an upregulation of genes related to metabolic processes, and translational changes therefore appear to support the postprandial rise in metabolism. We identify stomach-related proteins from a digesting individual and demonstrate that the sensitivity of modern liquid chromatography/tandem mass spectrometry equipment allows the identification of gastric juice proteins that are present during digestion. PMID:28873961
Transcriptome analysis of the response of Burmese python to digestion.
Duan, Jinjie; Sanggaard, Kristian Wejse; Schauser, Leif; Lauridsen, Sanne Enok; Enghild, Jan J; Schierup, Mikkel Heide; Wang, Tobias
2017-08-01
Exceptional and extreme feeding behaviour makes the Burmese python (Python bivittatus) an interesting model to study physiological remodelling and metabolic adaptation in response to refeeding after prolonged starvation. In this study, we used transcriptome sequencing of 5 visceral organs during fasting as well as 24 hours and 48 hours after ingestion of a large meal to unravel the postprandial changes in Burmese pythons. We first used the pooled data to perform a de novo assembly of the transcriptome and supplemented this with a proteomic survey of enzymes in the plasma and gastric fluid. We constructed a high-quality transcriptome with 34 423 transcripts, of which 19 713 (57%) were annotated. Among highly expressed genes (fragments per kilo base per million sequenced reads > 100 in 1 tissue), we found that the transition from fasting to digestion was associated with differential expression of 43 genes in the heart, 206 genes in the liver, 114 genes in the stomach, 89 genes in the pancreas, and 158 genes in the intestine. We interrogated the function of these genes to test previous hypotheses on the response to feeding. We also used the transcriptome to identify 314 secreted proteins in the gastric fluid of the python. Digestion was associated with an upregulation of genes related to metabolic processes, and translational changes therefore appear to support the postprandial rise in metabolism. We identify stomach-related proteins from a digesting individual and demonstrate that the sensitivity of modern liquid chromatography/tandem mass spectrometry equipment allows the identification of gastric juice proteins that are present during digestion. © The Authors 2017. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Barrett, P. E.
This BoF will be chaired by Paul Barrett and will begin with an introduction to Python in astronomy, be followed by reports of current Python projects, and conclude with a discussion about the current state of Python in astronomy. The introduction will give a brief overview of the language, highlighting modules, resources, and aspects of the language that are important to scientific programming and astronomical data analysis. The closing discussion will provide an opportunity for questions and comments.
Software components for medical image visualization and surgical planning
NASA Astrophysics Data System (ADS)
Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.
2001-05-01
Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy-to-understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL-based Visualization Toolkit as a base, we have developed a set of components that implement the above-mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte-compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object-oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
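The component style described, VTK classes driven from Python, can be illustrated with a minimal render pipeline; this is plain VTK usage standing in for the kind of building block the authors wrap, not their actual components.

```python
# Minimal VTK render pipeline driven from Python, illustrating the kind of
# building block the authors wrap; this is plain VTK usage, not their components.
import vtk

sphere = vtk.vtkSphereSource()               # stand-in for a segmented surface
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(sphere.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()                           # opens an interactive 3D view
```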
PyGlobal: A toolkit for automated compilation of DFT-based descriptors.
Nath, Shilpa R; Kurup, Sudheer S; Joshi, Kaustubh A
2016-06-15
Density Functional Theory (DFT)-based global reactivity descriptor calculations have emerged as powerful tools for studying the reactivity, selectivity, and stability of chemical and biological systems. A Python-based module, PyGlobal, has been developed for systematically parsing a typical Gaussian outfile and extracting the relevant energies of the HOMO and LUMO. Corresponding global reactivity descriptors are further calculated and the data is saved into a spreadsheet compatible with applications like Microsoft Excel and LibreOffice. The efficiency of the module has been assessed by measuring the time interval for randomly selected Gaussian outfiles for 1000 molecules. © 2016 Wiley Periodicals, Inc.
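The descriptor arithmetic PyGlobal automates follows standard conceptual-DFT definitions; the sketch below applies those formulas to example HOMO/LUMO energies, while the parsing of the Gaussian output itself is omitted.

```python
# Hedged sketch of the global-reactivity-descriptor arithmetic PyGlobal automates,
# using standard conceptual-DFT formulas; HOMO/LUMO values are illustrative and
# the Gaussian-output parsing step is omitted.
e_homo, e_lumo = -0.25, -0.05      # orbital energies in hartree (illustrative)

mu = (e_homo + e_lumo) / 2         # chemical potential
chi = -mu                          # electronegativity
eta = (e_lumo - e_homo) / 2        # chemical hardness
softness = 1 / (2 * eta)           # global softness
omega = mu**2 / (2 * eta)          # electrophilicity index

print(f"mu={mu:.4f}  chi={chi:.4f}  eta={eta:.4f}  S={softness:.3f}  omega={omega:.4f}")
```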
77 FR 40865 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-11
...://www.regulations.gov as they are received without change, including any personal identifiers or contact.... NM01500-13 System Name: Naval Postgraduate School Education Management System (PYTHON) System Location: U.S. Naval Postgraduate School (NPS), 1 University Circle, Monterey, CA 93943-5100. Categories of...
TriBITS (Tribal Build, Integrate, and Test System)
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-05-16
TriBITS is a configuration, build, test, and reporting system that uses the Kitware open-source CMake/CTest/CDash system. TriBITS contains a number of custom CMake/CTest scripts and Python scripts that extend the functionality of the out-of-the-box CMake/CTest/CDash system.
Real-time Shakemap implementation in Austria
NASA Astrophysics Data System (ADS)
Weginger, Stefan; Jia, Yan; Papi Isaba, Maria; Horn, Nikolaus
2017-04-01
ShakeMaps provide near-real-time maps of ground motion and shaking intensity following significant earthquakes. They are automatically generated within a few minutes of the occurrence of an earthquake. We tested and integrated the Python-based USGS ShakeMap 4.0 (experimental code) into the Antelope real-time system, with locally modified GMPEs and site effects based on conditions in Austria. The ShakeMaps are provided in terms of intensity, PGA, PGV, and PSA. Future presentation of ShakeMap contour lines and ground-motion parameters with interactive maps and data exchange over web services is also shown.
Urbanization may limit impacts of an invasive predator on native mammal diversity
Reichert, Brian E.; Sovie, Adia R.; Udell, Brad J.; Hart, Kristen M.; Borkhataria, Rena R.; Bonneau, Mathieu; Reed, Robert; McCleery, Robert A.
2017-01-01
Aim: Our understanding of the effects of invasive species on faunal diversity is limited in part because invasions often occur in modified landscapes where other drivers of community diversity can exacerbate or reduce the net impacts of an invader. Furthermore, rigorous assessments of the effects of invasive species on native communities that account for variation in sampling, species-specific detection and occurrence of rare species are lacking. Invasive Burmese pythons (Python molurus bivittatus) may be causing declines in medium- to large-sized mammals throughout the Greater Everglades Ecosystem (GEE); however, other factors such as urbanization, habitat changes and drastic alteration in water flow may also be influential in structuring mammal communities. The aim of this study was to gain an understanding of how mammal communities simultaneously facing invasive predators and intensively human-altered landscapes are influenced by these drivers and their interactions. Location: Florida, USA. Methods: We used data from trail cameras and scat searches with a hierarchical community model that accounts for undetected species to determine the relative influence of introduced Burmese pythons, urbanization, local hydrology, habitat types and interactive effects between pythons and urbanization on mammal species occurrence, site-level species richness, and turnover. Results: Python density had significant negative effects on all species except coyotes. Despite these negative effects, occurrence of some generalist species increased significantly near urban areas. At the community level, pythons had the greatest impact on species richness, while turnover was greatest along the urbanization gradient where communities were increasingly similar as distance to urbanization decreased. Main conclusions: We found evidence for an antagonistic interaction between pythons and urbanization where the impacts of pythons were reduced near urban development. Python-induced changes to mammal communities may be mediated near urban development, but elsewhere in the GEE, pythons are likely causing a fundamental restructuring of the food web, declines in ecosystem function, and creating complex and unpredictable cascading effects.
Pythons metabolize prey to fuel the response to feeding.
Starck, J. Matthias; Moser, Patrick; Werner, Roland A.; Linke, Petra
2004-01-01
We investigated the energy source fuelling the post-feeding metabolic upregulation (specific dynamic action, SDA) in pythons (Python regius). Our goal was to distinguish between two alternatives: (i) snakes fuel SDA by metabolizing energy depots from their tissues; or (ii) snakes fuel SDA by metabolizing their prey. To characterize the postprandial response of pythons we used transcutaneous ultrasonography to measure organ-size changes and respirometry to record oxygen consumption. To discriminate unequivocally between the two hypotheses, we enriched mice (= prey) with the stable isotope of carbon (13C). For two weeks after feeding we quantified the CO2 exhaled by pythons and determined its isotopic 13C/12C signature. Ultrasonography and respirometry showed typical postprandial responses in pythons. After feeding, the isotope ratio of the exhaled breath changed rapidly to values that characterized enriched mouse tissue, followed by a very slow change towards less enriched values over a period of two weeks after feeding. We conclude that pythons metabolize their prey to fuel SDA. The slowly declining delta13C values indicate that less enriched tissues (bone, cartilage and collagen) from the mouse become available after several days of digestion. PMID:15255044
Hart, Kristen M.; Schofield, Pamela J.; Gregoire, Denise R.
2012-01-01
In a laboratory setting, we tested the ability of 24 non-native, wild-caught hatchling Burmese pythons (Python molurus bivittatus) collected in the Florida Everglades to survive when given water containing salt to drink. After a one-month acclimation period in the laboratory, we grouped snakes into three treatments, giving them access to water that was fresh (salinity of 0, control), brackish (salinity of 10), or full-strength sea water (salinity of 35). Hatchlings survived about one month at the highest marine salinity and about five months at the brackish-water salinity; no control animals perished during the experiment. These results are indicative of a "worst-case scenario", as in the laboratory we denied access to alternate fresh-water sources that may be accessible in the wild (e.g., through rainfall). Therefore, our results may underestimate the potential of hatchling pythons to persist in saline habitats in the wild. Because of the effect of different salinity regimes on survival, predictions of ultimate geographic expansion by non-native Burmese pythons that consider salt water as barriers to dispersal for pythons may warrant re-evaluation, especially under global climate change and associated sea-level-rise scenarios.
Hart, K.M.; Schofield, P.J.; Gregoire, D.R.
2012-01-01
In a laboratory setting, we tested the ability of 24 non-native, wild-caught hatchling Burmese pythons (Python molurus bivittatus) collected in the Florida Everglades to survive when given water containing salt to drink. After a one-month acclimation period in the laboratory, we grouped snakes into three treatments, giving them access to water that was fresh (salinity of 0, control), brackish (salinity of 10), or full-strength sea water (salinity of 35). Hatchlings survived about one month at the highest marine salinity and about five months at the brackish-water salinity; no control animals perished during the experiment. These results are indicative of a "worst-case scenario", as in the laboratory we denied access to alternate fresh-water sources that may be accessible in the wild (e.g., through rainfall). Therefore, our results may underestimate the potential of hatchling pythons to persist in saline habitats in the wild. Because of the effect of different salinity regimes on survival, predictions of ultimate geographic expansion by non-native Burmese pythons that consider salt water as barriers to dispersal for pythons may warrant re-evaluation, especially under global climate change and associated sea-level-rise scenarios. ?? 2011.
Visualizing complex (hydrological) systems with correlation matrices
NASA Astrophysics Data System (ADS)
Haas, J. C.
2016-12-01
Understanding or visualizing the connections between different aspects of a complex system often requires deeper understanding to start with or, in the case of geo data, complicated GIS software. To our knowledge, correlation matrices have rarely been used in hydrology (e.g. Stoll et al., 2011; van Loon and Laaha, 2015), yet they do provide an interesting option for data visualization and analysis. We present a simple, Python-based way, using a river catchment as an example, to visualize correlations and similarities in an easy and colorful way. We apply existing and easy-to-use Python packages from various disciplines not necessarily linked to the Earth sciences and can thus quickly show how different aquifers work or react, and identify outliers, enabling this system to also be used for quality control of large datasets. Going beyond earlier work, we add temporal and spatial elements, enabling us to visualize how a system reacts to local phenomena such as a river, or how it changes over time, by animating the passing of time in a movie. References: van Loon, A.F., Laaha, G.: Hydrological drought severity explained by climate and catchment characteristics, Journal of Hydrology 526, 3-14, 2015, Drought processes, modeling, and mitigation; Stoll, S., Hendricks Franssen, H.J., Barthel, R., Kinzelbach, W.: What can we learn from long-term groundwater data to improve climate change impact studies?, Hydrology and Earth System Sciences 15(12), 3861-3875, 2011.
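A correlation-matrix visualization of the kind described can be reproduced with pandas and matplotlib in a few lines; the hydrological time series below are synthetic placeholders, not the catchment data used in the study.

```python
# Hedged sketch of a correlation-matrix visualization with pandas/matplotlib;
# the hydrological time series are synthetic placeholders.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
base = rng.normal(size=365)
df = pd.DataFrame({
    "river_stage": base,
    "well_A": base + rng.normal(scale=0.3, size=365),   # responds to the river
    "well_B": rng.normal(size=365),                      # behaves independently
})

corr = df.corr()                                 # Pearson correlation matrix
fig, ax = plt.subplots()
im = ax.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
ax.set_xticks(range(len(corr)))
ax.set_xticklabels(corr.columns, rotation=45)
ax.set_yticks(range(len(corr)))
ax.set_yticklabels(corr.columns)
fig.colorbar(im, ax=ax, label="Pearson correlation")
plt.tight_layout()
plt.show()
```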
Motion control system of MAX IV Laboratory soft x-ray beamlines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjöblom, Peter, E-mail: peter.sjoblom@maxlab.lu.se; Lindberg, Mirjam, E-mail: mirjam.lindberg@maxlab.lu.se; Forsberg, Johan, E-mail: johan.forsberg@maxlab.lu.se
2016-07-27
At the MAX IV Laboratory, five new soft x-ray beamlines are under development. The first is Species, and it will be used to develop and set the standard of the control system, which will be common across the facility. All motion axes at MAX IV will be motorized using stepper motors steered by the IcePAP motion controller and a mixture of absolute and incremental encoders following a predefined coordinate system. The control system software is built in Tango and uses the Python-based Sardana framework. The user controls the entire beamline through a synoptic overview, and Sardana is used to run the scans.
PyMICE: A Python library for analysis of IntelliCage data.
Dzik, Jakub M; Puścian, Alicja; Mijakowska, Zofia; Radwanska, Kasia; Łęski, Szymon
2018-04-01
IntelliCage is an automated system for recording the behavior of a group of mice housed together. It produces rich, detailed behavioral data calling for new methods and software for their analysis. Here we present PyMICE, a free and open-source library for analysis of IntelliCage data in the Python programming language. We describe the design and demonstrate the use of the library through a series of examples. PyMICE provides easy and intuitive access to IntelliCage data, and thus facilitates the use of numerous other Python scientific libraries to form a complete data analysis workflow.
XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations
NASA Astrophysics Data System (ADS)
Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.
2013-01-01
XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code. Program summary: Program title: XMDS2. Catalogue identifier: AENK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 2. No. of lines in distributed program, including test data, etc.: 872490. No. of bytes in distributed program, including test data, etc.: 45522370. Distribution format: tar.gz. Programming language: Python and C++. Computer: Any computer with a Unix-like system, a C++ compiler and Python. Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux. RAM: Problem dependent (roughly 50 bytes per grid point). Classification: 4.3, 6.5. External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods). Nature of problem: General coupled initial-value stochastic partial differential equations. Solution method: Spectral method with method-of-lines integration. Running time: Determined by the size of the problem.
Accelerating wave propagation modeling in the frequency domain using Python
NASA Astrophysics Data System (ADS)
Jo, Sang Hoon; Park, Min Jun; Ha, Wan Soo
2017-04-01
Python is a dynamic programming language adopted in many science and engineering areas. We used Python to simulate wave propagation in the frequency domain, using the Pardiso matrix solver to solve the impedance matrix of the wave equation. Numerical examples show that Python with NumPy takes longer than Fortran to construct the impedance matrix using the finite element method; however, we could reduce the time significantly, to a level comparable to that of Fortran, using a simple Numba decorator.
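The kind of speed-up described can be illustrated with a rough sketch (not the authors' code): a matrix-assembly loop compiled with a Numba decorator. The tridiagonal "impedance" matrix here is a generic stand-in, not the paper's 2-D finite-element assembly.

```python
import numpy as np
from numba import njit

@njit  # compile the assembly loop to machine code on first call
def assemble_impedance(n, omega, c):
    # Toy 1-D Helmholtz-like tridiagonal matrix, used only to show the decorator.
    A = np.zeros((n, n), dtype=np.complex128)
    k2 = (omega / c) ** 2
    for i in range(n):
        A[i, i] = -2.0 + k2
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    return A

A = assemble_impedance(2000, 2 * np.pi * 10.0, 1500.0)
print(A.shape)
```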
HOPE: Just-in-time Python compiler for astrophysical computations
NASA Astrophysics Data System (ADS)
Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre
2014-11-01
HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of a compiled implementation.
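Usage is reported to be a single decorator on the target function; a minimal sketch, assuming the decorator is exposed as hope.jit and the package is installed:

```python
import hope  # assumption: the HOPE package exposes its JIT decorator as hope.jit

@hope.jit  # translate this function to C++ and compile it on first call
def polynomial(x, a, b, c):
    return a * x ** 2 + b * x + c

y = polynomial(0.5, 1.0, -2.0, 3.0)
print(y)
```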
MADANALYSIS 5, a user-friendly framework for collider phenomenology
NASA Astrophysics Data System (ADS)
Conte, Eric; Fuks, Benjamin; Serret, Guillaume
2013-01-01
We present MADANALYSIS 5, a new framework for phenomenological investigations at particle colliders. Based on a C++ kernel, this program allows us to efficiently perform, in a straightforward and user-friendly fashion, sophisticated physics analyses of event files such as those generated by a large class of Monte Carlo event generators. MADANALYSIS 5 comes with two modes of running. The first one, easier to handle, uses the strengths of a powerful PYTHON interface in order to implement physics analyses by means of a set of intuitive commands. The second one requires one to implement the analyses in the C++ programming language, directly within the core of the analysis framework. This opens unlimited possibilities concerning the level of complexity which can be reached, being only limited by the programming skills and the originality of the user. Program summary: Program title: MadAnalysis 5. Catalogue identifier: AENO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENO_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Permission to use, copy, modify and distribute this program is granted under the terms of the GNU General Public License. No. of lines in distributed program, including test data, etc.: 31087. No. of bytes in distributed program, including test data, etc.: 399105. Distribution format: tar.gz. Programming language: PYTHON, C++. Computer: All platforms on which Python version 2.7, Root version 5.27 and the g++ compiler are available; compatibility with newer versions of these programs is also ensured, however the Python version must be below version 3.0. Operating system: Unix, Linux and Mac OS operating systems on which the above-mentioned versions of Python and Root, as well as g++, are available. Classification: 11.1. External routines: ROOT (http://root.cern.ch/drupal/). Nature of problem: Implementing sophisticated phenomenological analyses in high-energy physics in a flexible, efficient and straightforward fashion, starting from event files such as those produced by Monte Carlo event generators. The event files may or may not have been matched to parton showering, and may or may not have been processed by a (fast) simulation of a detector; depending on the sophistication level of the event files (parton-level, hadron-level, reconstructed-level), several input formats are possible. Solution method: We implement an interface allowing the production of predefined as well as user-defined histograms for a large class of kinematical distributions after applying a set of event selection cuts specified by the user. This allows us to devise robust and novel search strategies for collider experiments, such as those currently running at the Large Hadron Collider at CERN, in a very efficient way. Restrictions: Unsupported event file formats. Unusual features: The code is fully based on object representations for events, particles, reconstructed objects and cuts, which facilitates the implementation of an analysis. Running time: It depends on the purposes of the user and on the number of events to process; it varies from a few seconds to on the order of a minute for several million events.
System for Automated Geoscientific Analyses (SAGA) v. 2.1.4
NASA Astrophysics Data System (ADS)
Conrad, O.; Bechtel, B.; Bock, M.; Dietrich, H.; Fischer, E.; Gerlitz, L.; Wehberg, J.; Wichmann, V.; Böhner, J.
2015-02-01
The System for Automated Geoscientific Analyses (SAGA) is an open-source Geographic Information System (GIS), mainly licensed under the GNU General Public License. Since its first release in 2004, SAGA has rapidly developed from a specialized tool for digital terrain analysis to a comprehensive and globally established GIS platform for scientific analysis and modeling. SAGA is coded in C++ in an object oriented design and runs under several operating systems including Windows and Linux. Key functional features of the modular organized software architecture comprise an application programming interface for the development and implementation of new geoscientific methods, an easily approachable graphical user interface with many visualization options, a command line interpreter, and interfaces to scripting and low level programming languages like R and Python. The current version 2.1.4 offers more than 700 tools, which are implemented in dynamically loadable libraries or shared objects and represent the broad scopes of SAGA in numerous fields of geoscientific endeavor and beyond. In this paper, we inform about the system's architecture, functionality, and its current state of development and implementation. Further, we highlight the wide spectrum of scientific applications of SAGA in a review of published studies with special emphasis on the core application areas digital terrain analysis, geomorphology, soil science, climatology and meteorology, as well as remote sensing.
System for Automated Geoscientific Analyses (SAGA) v. 2.1.4
NASA Astrophysics Data System (ADS)
Conrad, O.; Bechtel, B.; Bock, M.; Dietrich, H.; Fischer, E.; Gerlitz, L.; Wehberg, J.; Wichmann, V.; Böhner, J.
2015-07-01
The System for Automated Geoscientific Analyses (SAGA) is an open source geographic information system (GIS), mainly licensed under the GNU General Public License. Since its first release in 2004, SAGA has rapidly developed from a specialized tool for digital terrain analysis to a comprehensive and globally established GIS platform for scientific analysis and modeling. SAGA is coded in C++ in an object oriented design and runs under several operating systems including Windows and Linux. Key functional features of the modular software architecture comprise an application programming interface for the development and implementation of new geoscientific methods, a user friendly graphical user interface with many visualization options, a command line interpreter, and interfaces to interpreted languages like R and Python. The current version 2.1.4 offers more than 600 tools, which are implemented in dynamically loadable libraries or shared objects and represent the broad scopes of SAGA in numerous fields of geoscientific endeavor and beyond. In this paper, we inform about the system's architecture, functionality, and its current state of development and implementation. Furthermore, we highlight the wide spectrum of scientific applications of SAGA in a review of published studies, with special emphasis on the core application areas digital terrain analysis, geomorphology, soil science, climatology and meteorology, as well as remote sensing.
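One simple way to script SAGA from Python is to call its command line interpreter (saga_cmd) rather than the bindings directly; this is an illustrative sketch, and the library/tool identifiers and file names below are assumptions that must be checked against the local installation:

```python
import subprocess

# Run SAGA's slope/aspect/curvature tool (assumed: library ta_morphometry, tool 0)
# on a DEM grid; the .sgrd paths are placeholders.
subprocess.run(
    [
        "saga_cmd", "ta_morphometry", "0",
        "-ELEVATION", "dem.sgrd",
        "-SLOPE", "slope.sgrd",
        "-ASPECT", "aspect.sgrd",
    ],
    check=True,
)
```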
Programming an offline-analyzer of motor imagery signals via python language.
Alonso-Valerdi, Luz María; Sepulveda, Francisco
2011-01-01
Brain Computer Interface (BCI) systems control the user's environment via his/her brain signals. Brain signals related to motor imagery (MI) have become a widespread method employed by the BCI community. Despite the large number of references describing MI signal treatment, there is little information on which available programming languages are suitable for developing a specific-purpose MI-based BCI. The present paper describes the development of an offline-analysis system for MI-EEG signals using open-source programming languages, and the assessment of the system using electrical activity recorded from three subjects. The analyzer recognized at least 63% of the MI signals corresponding to three classes. The results of the offline analysis showed promising performance considering that the subjects had never undergone MI training.
MDTraj: A Modern Open Library for the Analysis of Molecular Dynamics Trajectories.
McGibbon, Robert T; Beauchamp, Kyle A; Harrigan, Matthew P; Klein, Christoph; Swails, Jason M; Hernández, Carlos X; Schwantes, Christian R; Wang, Lee-Ping; Lane, Thomas J; Pande, Vijay S
2015-10-20
As molecular dynamics (MD) simulations continue to evolve into powerful computational tools for studying complex biomolecular systems, the necessity of flexible and easy-to-use software tools for the analysis of these simulations is growing. We have developed MDTraj, a modern, lightweight, and fast software package for analyzing MD simulations. MDTraj reads and writes trajectory data in a wide variety of commonly used formats. It provides a large number of trajectory analysis capabilities including minimal root-mean-square-deviation calculations, secondary structure assignment, and the extraction of common order parameters. The package has a strong focus on interoperability with the wider scientific Python ecosystem, bridging the gap between MD data and the rapidly growing collection of industry-standard statistical analysis and visualization tools in Python. MDTraj is a powerful and user-friendly software package that simplifies the analysis of MD data and connects these datasets with the modern interactive data science software ecosystem in Python. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
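A typical MDTraj session looks roughly like the following; the trajectory and topology file names are placeholders:

```python
import mdtraj as md

traj = md.load("trajectory.xtc", top="topology.pdb")  # read a trajectory in one of many formats
print(traj)                                            # frames, atoms, time step

rmsd = md.rmsd(traj, traj, frame=0)                    # minimal RMSD to the first frame
dssp = md.compute_dssp(traj)                           # secondary structure per residue and frame
```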
CMCpy: Genetic Code-Message Coevolution Models in Python
Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.
2013-01-01
Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367
Automatic Parallelization of Numerical Python Applications using the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Jeffrey A.; Lewis, Robert R.
2011-11-30
Global Arrays is a software system from Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate distributed dense arrays. The NumPy module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. NumPy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, NumPy is inherently serial. Using a combination of Global Arrays and NumPy, we have reimplemented NumPy as a distributed drop-in replacement called Global Arrays in NumPy (GAiN). Serial NumPy applications can become parallel, scalable GAiN applications with only minor source code changes. Scalability studies of several different GAiN applications will be presented showing the utility of developing serial NumPy codes which can later run on more capable clusters or supercomputers.
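The "drop-in" idea is that existing NumPy code changes little more than its import; a sketch under the assumption that the GAiN module is importable as gain (the actual module name and launch mechanism may differ):

```python
# Serial version would read:  import numpy as np
import gain as np  # assumption: GAiN exposes a NumPy-compatible module named "gain"

a = np.zeros((10000, 10000))  # array storage is distributed across processes
a += 1.0
print(a.sum())                # collective operations behave like their NumPy counterparts
```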
MDTraj: A Modern Open Library for the Analysis of Molecular Dynamics Trajectories
McGibbon, Robert T.; Beauchamp, Kyle A.; Harrigan, Matthew P.; Klein, Christoph; Swails, Jason M.; Hernández, Carlos X.; Schwantes, Christian R.; Wang, Lee-Ping; Lane, Thomas J.; Pande, Vijay S.
2015-01-01
As molecular dynamics (MD) simulations continue to evolve into powerful computational tools for studying complex biomolecular systems, the necessity of flexible and easy-to-use software tools for the analysis of these simulations is growing. We have developed MDTraj, a modern, lightweight, and fast software package for analyzing MD simulations. MDTraj reads and writes trajectory data in a wide variety of commonly used formats. It provides a large number of trajectory analysis capabilities including minimal root-mean-square-deviation calculations, secondary structure assignment, and the extraction of common order parameters. The package has a strong focus on interoperability with the wider scientific Python ecosystem, bridging the gap between MD data and the rapidly growing collection of industry-standard statistical analysis and visualization tools in Python. MDTraj is a powerful and user-friendly software package that simplifies the analysis of MD data and connects these datasets with the modern interactive data science software ecosystem in Python. PMID:26488642
OpenSeesPy: Python library for the OpenSees finite element framework
NASA Astrophysics Data System (ADS)
Zhu, Minjie; McKenna, Frank; Scott, Michael H.
2018-01-01
OpenSees, an open source finite element software framework, has been used broadly in the earthquake engineering community for simulating the seismic response of structural and geotechnical systems. The framework allows users to perform finite element analysis with a scripting language and for developers to create both serial and parallel finite element computer applications as interpreters. For the last 15 years, Tcl has been the primary scripting language to which the model building and analysis modules of OpenSees are linked. To provide users with different scripting language options, particularly Python, the OpenSees interpreter interface was refactored to provide multi-interpreter capabilities. This refactoring, resulting in the creation of OpenSeesPy as a Python module, is accomplished through an abstract interface for interpreter calls with concrete implementations for different scripting languages. Through this approach, users are able to develop applications that utilize the unique features of several scripting languages while taking advantage of advanced finite element analysis models and algorithms.
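A minimal OpenSeesPy session, sketched from memory rather than taken from the paper; the two-node truss problem and all parameter values are illustrative:

```python
import openseespy.opensees as ops

ops.wipe()
ops.model("basic", "-ndm", 2, "-ndf", 2)   # 2-D model, 2 DOF per node

ops.node(1, 0.0, 0.0)
ops.node(2, 100.0, 0.0)
ops.fix(1, 1, 1)                           # pin node 1
ops.fix(2, 0, 1)                           # roller at node 2

ops.uniaxialMaterial("Elastic", 1, 29000.0)
ops.element("Truss", 1, 1, 2, 10.0, 1)     # area = 10, material tag 1

ops.timeSeries("Linear", 1)
ops.pattern("Plain", 1, 1)
ops.load(2, 50.0, 0.0)                     # horizontal load at node 2

ops.constraints("Plain")
ops.numberer("RCM")
ops.system("BandSPD")
ops.integrator("LoadControl", 1.0)
ops.algorithm("Linear")
ops.analysis("Static")
ops.analyze(1)

print(ops.nodeDisp(2, 1))                  # axial displacement of node 2
```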
Cosmic Microwave Background Anisotropy Measurement from Python V
NASA Astrophysics Data System (ADS)
Coble, K.; Dodelson, S.; Dragovan, M.; Ganga, K.; Knox, L.; Kovac, J.; Ratra, B.; Souradeep, T.
2003-02-01
We analyze observations of the microwave sky made with the Python experiment in its fifth year of operation at the Amundsen-Scott South Pole Station in Antarctica. After modeling the noise and constructing a map, we extract the cosmic signal from the data. We simultaneously estimate the angular power spectrum in eight bands ranging from large (l~40) to small (l~260) angular scales, with power detected in the first six bands. There is a significant rise in the power spectrum from large to smaller (l~200) scales, consistent with that expected from acoustic oscillations in the early universe. We compare this Python V map to a map made from data taken in the third year of Python. Python III observations were made at a frequency of 90 GHz and covered a subset of the region of the sky covered by Python V observations, which were made at 40 GHz. Good agreement is obtained both visually (with a filtered version of the map) and via a likelihood ratio test.
PyMUS: Python-Based Simulation Software for Virtual Experiments on Motor Unit System
Kim, Hojeong; Kim, Minjung
2018-01-01
We constructed a physiologically plausible computationally efficient model of a motor unit and developed simulation software that allows for integrative investigations of the input–output processing in the motor unit system. The model motor unit was first built by coupling the motoneuron model and muscle unit model to a simplified axon model. To build the motoneuron model, we used a recently reported two-compartment modeling approach that accurately captures the key cell-type-related electrical properties under both passive conditions (somatic input resistance, membrane time constant, and signal attenuation properties between the soma and the dendrites) and active conditions (rheobase current and afterhyperpolarization duration at the soma and plateau behavior at the dendrites). To construct the muscle unit, we used a recently developed muscle modeling approach that reflects the experimentally identified dependencies of muscle activation dynamics on isometric, isokinetic and dynamic variation in muscle length over a full range of stimulation frequencies. Then, we designed the simulation software based on the object-oriented programing paradigm and developed the software using open-source Python language to be fully operational using graphical user interfaces. Using the developed software, separate simulations could be performed for a single motoneuron, muscle unit and motor unit under a wide range of experimental input protocols, and a hierarchical analysis could be performed from a single channel to the entire system behavior. Our model motor unit and simulation software may represent efficient tools not only for researchers studying the neural control of force production from a cellular perspective but also for instructors and students in motor physiology classroom settings. PMID:29695959
PyMUS: Python-Based Simulation Software for Virtual Experiments on Motor Unit System.
Kim, Hojeong; Kim, Minjung
2018-01-01
We constructed a physiologically plausible computationally efficient model of a motor unit and developed simulation software that allows for integrative investigations of the input-output processing in the motor unit system. The model motor unit was first built by coupling the motoneuron model and muscle unit model to a simplified axon model. To build the motoneuron model, we used a recently reported two-compartment modeling approach that accurately captures the key cell-type-related electrical properties under both passive conditions (somatic input resistance, membrane time constant, and signal attenuation properties between the soma and the dendrites) and active conditions (rheobase current and afterhyperpolarization duration at the soma and plateau behavior at the dendrites). To construct the muscle unit, we used a recently developed muscle modeling approach that reflects the experimentally identified dependencies of muscle activation dynamics on isometric, isokinetic and dynamic variation in muscle length over a full range of stimulation frequencies. Then, we designed the simulation software based on the object-oriented programing paradigm and developed the software using open-source Python language to be fully operational using graphical user interfaces. Using the developed software, separate simulations could be performed for a single motoneuron, muscle unit and motor unit under a wide range of experimental input protocols, and a hierarchical analysis could be performed from a single channel to the entire system behavior. Our model motor unit and simulation software may represent efficient tools not only for researchers studying the neural control of force production from a cellular perspective but also for instructors and students in motor physiology classroom settings.
NASA Astrophysics Data System (ADS)
Moulton, J. D.; Steefel, C. I.; Yabusaki, S.; Castleton, K.; Scheibe, T. D.; Keating, E. H.; Freedman, V. L.
2013-12-01
The Advanced Simulation Capability for Environmental Management (ASCEM) program is developing an approach and open-source tool suite for standardized risk and performance assessments at legacy nuclear waste sites. These assessments use a graded and iterative approach, beginning with simplified highly abstracted models, and adding geometric and geologic complexity as understanding is gained. To build confidence in this assessment capability, extensive testing of the underlying tools is needed. Since the tools themselves, such as the subsurface flow and reactive-transport simulator, Amanzi, are under active development, testing must be both hierarchical and highly automated. In this presentation we show how we have met these requirements, by leveraging the Python-based open-source documentation system called Sphinx with several other open-source tools. Sphinx builds on the reStructuredText tool docutils, with important extensions that include high-quality formatting of equations, and integrated plotting through matplotlib. This allows the documentation, as well as the input files for tests, benchmark and tutorial problems, to be maintained with the source code under a version control system. In addition, it enables developers to build documentation in several different formats (e.g., html and pdf) from a single source. We will highlight these features, and discuss important benefits of this approach for Amanzi. In addition, we'll show that some of ASCEM's other tools, such as the sampling provided by the Uncertainty Quantification toolset, are naturally leveraged to enable more comprehensive testing. Finally, we will highlight the integration of this hierarchical testing and documentation framework with our build system and tools (CMake, CTest, and CDash).
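The Sphinx features mentioned (equation rendering, integrated matplotlib plots, multiple output formats from one source) are typically enabled in a project's conf.py; a minimal sketch with commonly used extension names, where the project-specific settings are assumptions:

```python
# conf.py -- minimal Sphinx configuration sketch
project = "Amanzi test suite"                 # placeholder project name
extensions = [
    "sphinx.ext.mathjax",                     # high-quality equation rendering in HTML output
    "matplotlib.sphinxext.plot_directive",    # execute and embed matplotlib plots from the docs
]
source_suffix = ".rst"                        # reStructuredText sources
master_doc = "index"
# The same sources can then be built to different formats, e.g.:
#   sphinx-build -b html  source/ build/html
#   sphinx-build -b latex source/ build/latex
```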
ObsPy: A Python Toolbox for Seismology
NASA Astrophysics Data System (ADS)
Wassermann, J. M.; Krischer, L.; Megies, T.; Barsch, R.; Beyreuther, M.
2013-12-01
Python combines the power of a full-blown programming language with the flexibility and accessibility of an interactive scripting language. Its extensive standard library and large variety of freely available high quality scientific modules cover most needs in developing scientific processing workflows. ObsPy is a community-driven, open-source project extending Python's capabilities to fit the specific needs that arise when working with seismological data. It a) comes with a continuously growing signal processing toolbox that covers most tasks common in seismological analysis, b) provides read and write support for many common waveform, station and event metadata formats and c) enables access to various data centers, webservices and databases to retrieve waveform data and station/event metadata. In combination with mature and free Python packages like NumPy, SciPy, Matplotlib, IPython, Pandas, lxml, and PyQt, ObsPy makes it possible to develop complete workflows in Python, ranging from reading locally stored data or requesting data from one or more different data centers via signal analysis and data processing to visualization in GUI and web applications, output of modified/derived data and the creation of publication-quality figures. All functionality is extensively documented and the ObsPy Tutorial and Gallery give a good impression of the wide range of possible use cases. ObsPy is tested and running on Linux, OS X and Windows and comes with installation routines for these systems. ObsPy is developed in a test-driven approach and is available under the LGPLv3 open source licence. Users are welcome to request help, report bugs, propose enhancements or contribute code via either the user mailing list or the project page on GitHub.
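A typical ObsPy workflow reads waveforms, applies signal processing, and plots; the file name below is a placeholder:

```python
from obspy import read

st = read("example.mseed")                        # Stream of Trace objects; many formats supported
st.detrend("demean")
st.filter("bandpass", freqmin=1.0, freqmax=10.0)  # standard signal-processing toolbox
print(st)
st.plot()                                         # quick-look plot of all traces
```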
ATLAS tile calorimeter cesium calibration control and analysis software
NASA Astrophysics Data System (ADS)
Solovyanov, O.; Solodkov, A.; Starchenko, E.; Karyukhin, A.; Isaev, A.; Shalanda, N.
2008-07-01
An online control system to calibrate and monitor the ATLAS barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components such as DDC (DCS to DAQ Connection) to connect to the PVSS-based slow control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes, from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. Performance of the system and first experience from the ATLAS pit are presented.
System for NIS Forecasting Based on Ensembles Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-01-02
BMA-NIS is a package/library designed to be called by a script (e.g. Perl or Python). The software itself is written in the R language. The software assists electric power delivery systems in planning resource availability and demand, based on historical data and current data variables. Net Interchange Schedule (NIS) is the algebraic sum of all energy scheduled to flow into or out of a balancing area during any interval. Accurate forecasts for NIS are important so that the Area Control Error (ACE) stays within an acceptable limit. To date, there are many approaches for forecasting NIS, but all of these are based on single models that can be sensitive to time-of-day and day-of-week effects.
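Since the package is written in R but intended to be driven from a script, a Python caller can simply shell out to Rscript; the wrapper script name and its arguments below are assumptions, not part of the package:

```python
import subprocess

# Hypothetical wrapper: run an R script that loads the BMA-NIS library, fits the
# ensemble model on historical NIS data, and writes a forecast to CSV.
subprocess.run(
    ["Rscript", "run_bma_nis.R",
     "--history", "nis_history.csv",
     "--out", "nis_forecast.csv"],
    check=True,
)
```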
Scharett, Emma; Madathil, Kapil Chalil; Lopes, Snehal; Rogers, Hunter; Agnisarman, Sruthy; Narasimha, Shraddhaa; Ashok, Aparna; Dye, Cheryl
2017-10-01
Caregivers of Alzheimer's patients find respite in online communities for solutions and emotional support. This study aims to understand the characteristics of information caregivers of Alzheimer's patients are searching for and the kind of support they receive through Internet-based peer support communities. Using a Web crawler written in Python Web programming language, we retrieved publicly available 2,500 random posts and their respective solutions from April 2012 to October 2016 on the solutions category of the Caregiver's Forum on ALZConnected.org . A content analysis was conducted on these randomly selected posts and 4,219 responses to those posts based on a classification system were derived from initial analyses of 750 posts and related responses. The results showed most posts (26%) related to queries about Alzheimer's symptoms, and the highest percentage of responses (45.56%) pertained to caregiver well-being. The LIWC analyses generated an average tone rating of 27.27 for the posts, implying a negative tone and 65.17 for their responses, implying a slightly positive tone. The ALZConnected.org Web site has the potential of being an emotionally supportive tool for caregivers; however, a more user-friendly interface is required to accommodate the needs of most caregivers and their technological skills. Solutions offered on the peer support groups are often subjective opinions of other caregivers and should not be considered professional or comprehensive; further research on educating caregivers using online forums is necessary.
Penning, David A; Dartez, Schuyler F
2016-03-01
Constriction is a prey-immobilization technique used by many snakes and is hypothesized to have been important to the evolution and diversification of snakes. However, very few studies have examined the factors that affect constriction performance. We investigated constriction performance in ball pythons (Python regius) by evaluating how peak constriction pressure is affected by snake size, sex, and experience. In one experiment, we tested the ontogenetic scaling of constriction performance and found that snake diameter was the only significant factor determining peak constriction pressure. The number of loops applied in a coil and its interaction with snake diameter did not significantly affect constriction performance. Constriction performance in ball pythons scaled differently than in other snakes that have been studied, and medium to large ball pythons are capable of exerting significantly higher pressures than those shown to cause circulatory arrest in prey. In a second experiment, we tested the effects of experience on constriction performance in hatchling ball pythons over 10 feeding events. By allowing snakes in one test group to gain constriction experience, and manually feeding snakes under sedation in another test group, we showed that experience did not affect constriction performance. During their final (10th) feedings, all pythons constricted similarly and with sufficiently high pressures to kill prey rapidly. At the end of the 10 feeding trials, snakes that were allowed to constrict were significantly smaller than their non-constricting counterparts. © 2016 Wiley Periodicals, Inc.
Parallel, Distributed Scripting with Python
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, P J
2002-05-24
Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and co-ordinate the work.
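The password-checker example can be sketched with the standard library alone; this illustrates the task itself, not the paper's MPI-based Python extension, and the hash scheme and dictionary file are stand-ins:

```python
import hashlib
from multiprocessing import Pool

TARGET = hashlib.sha256(b"hunter2").hexdigest()   # stand-in for the stored password hash

def check(word):
    # Return the word if its hash matches the target, else None.
    return word if hashlib.sha256(word.encode()).hexdigest() == TARGET else None

if __name__ == "__main__":
    with open("dictionary.txt") as fh:            # e.g. ~25,000 candidate words
        words = [w.strip() for w in fh]
    with Pool() as pool:                          # distribute the checks across cores
        hits = [w for w in pool.map(check, words) if w]
    print("matches:", hits)
```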
NASA Astrophysics Data System (ADS)
Larour, Eric; Cheng, Daniel; Perez, Gilberto; Quinn, Justin; Morlighem, Mathieu; Duong, Bao; Nguyen, Lan; Petrie, Kit; Harounian, Silva; Halkides, Daria; Hayes, Wayne
2017-12-01
Earth system models (ESMs) are becoming increasingly complex, requiring extensive knowledge and experience to deploy and use in an efficient manner. They run on high-performance architectures that are significantly different from the everyday environments that scientists use to pre- and post-process results (i.e., MATLAB, Python). This results in models that are hard to use for non-specialists and are increasingly specific in their application. It also makes them relatively inaccessible to the wider science community, not to mention to the general public. Here, we present a new software/model paradigm that attempts to bridge the gap between the science community and the complexity of ESMs by developing a new JavaScript application program interface (API) for the Ice Sheet System Model (ISSM). The aforementioned API allows cryosphere scientists to run ISSM on the client side of a web page within the JavaScript environment. When combined with a web server running ISSM (using a Python API), it enables the serving of ISSM computations in an easy and straightforward way. The deep integration and similarities between all the APIs in ISSM (MATLAB, Python, and now JavaScript) significantly shortens and simplifies the turnaround of state-of-the-art science runs and their use by the larger community. We demonstrate our approach via a new Virtual Earth System Laboratory (VESL) website (http://vesl.jpl.nasa.gov, VESL(2017)).
78 FR 44275 - Semiannual Regulatory Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-23
... 1018-AV68 Evaluation; Constrictor Species From Python, Boa, and Eunectes Genera. National Park Service... Species From Python, Boa, and Eunectes Genera Legal Authority: 18 U.S.C. 42 Abstract: We are making a... Lacey Act: Reticulated python, DeSchauensee's anaconda, green anaconda, and Beni anaconda. The boa...
Interactive Parallel Data Analysis within Data-Centric Cluster Facilities using the IPython Notebook
NASA Astrophysics Data System (ADS)
Pascoe, S.; Lansdowne, J.; Iwi, A.; Stephens, A.; Kershaw, P.
2012-12-01
The data deluge is making traditional analysis workflows for many researchers obsolete. Support for parallelism within popular tools such as MATLAB, IDL and NCO is not well developed and rarely used. However, parallelism is necessary for processing modern data volumes on a timescale conducive to curiosity-driven analysis. Furthermore, for peta-scale datasets such as the CMIP5 archive, it is no longer practical to bring an entire dataset to a researcher's workstation for analysis, or even to their institutional cluster. Therefore, there is an increasing need to develop new analysis platforms which both enable processing at the point of data storage and provide parallelism. Such an environment should, where possible, maintain the convenience and familiarity of our current analysis environments to encourage curiosity-driven research. We describe how we are combining the interactive Python shell (IPython) with our JASMIN data-cluster infrastructure. IPython has been specifically designed to bridge the gap between HPC-style parallel workflows and the opportunistic curiosity-driven analysis usually carried out using domain specific languages and scriptable tools. IPython offers a web-based interactive environment, the IPython notebook, and a cluster engine for parallelism, all underpinned by the well-respected Python/SciPy scientific programming stack. JASMIN is designed to support the data analysis requirements of the UK and European climate and earth system modeling community. JASMIN, with its sister facility CEMS serving the earth observation community, has 4.5 PB of fast parallel disk storage alongside over 370 computing cores providing local computation. Through the IPython interface to JASMIN, users can make efficient use of JASMIN's multi-core virtual machines to perform interactive analysis on all cores simultaneously or can configure IPython clusters across multiple VMs. Larger-scale clusters can be provisioned through JASMIN's batch scheduling system. Outputs can be summarised and visualised using the full power of Python's many scientific tools, including SciPy, Matplotlib, Pandas and CDAT. This rich user experience is delivered through the user's web browser, maintaining the interactive feel of a workstation-based environment with the parallel power of a remote data-centric processing facility.
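With an IPython cluster started on the analysis VMs (e.g. via ipcluster), notebook-side code of that era looked roughly like the following; the file names and the per-engine function are illustrative:

```python
from IPython.parallel import Client   # IPython parallel API of the time (now the ipyparallel package)

rc = Client()                         # connect to the running cluster
view = rc[:]                          # a view over all engines

def column_mean(filename):
    import numpy as np
    return np.loadtxt(filename).mean(axis=0)

# Map one data chunk to each engine and gather the results interactively.
files = ["chunk_%02d.txt" % i for i in range(len(rc.ids))]
results = view.map_sync(column_mean, files)
print(results)
```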
KMCLib: A general framework for lattice kinetic Monte Carlo (KMC) simulations
NASA Astrophysics Data System (ADS)
Leetmaa, Mikael; Skorodumova, Natalia V.
2014-09-01
KMCLib is a general framework for lattice kinetic Monte Carlo (KMC) simulations. The program can handle simulations of the diffusion and reaction of millions of particles in one, two, or three dimensions, and is designed to be easily extended and customized by the user to allow for the development of complex custom KMC models for specific systems without having to modify the core functionality of the program. Analysis modules and on-the-fly elementary step diffusion rate calculations can be implemented as plugins following a well-defined API. The plugin modules are loosely coupled to the core KMCLib program via the Python scripting language. KMCLib is written as a Python module with a backend C++ library. After initial compilation of the backend library KMCLib is used as a Python module; input to the program is given as a Python script executed using a standard Python interpreter. We give a detailed description of the features and implementation of the code and demonstrate its scaling behavior and parallel performance with a simple one-dimensional A-B-C lattice KMC model and a more complex three-dimensional lattice KMC model of oxygen-vacancy diffusion in a fluorite structured metal oxide. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids. Catalogue identifier: AESZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AESZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 49 064 No. of bytes in distributed program, including test data, etc.: 1 575 172 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer that can run a C++ compiler and a Python interpreter. Operating system: Tested on Ubuntu 12.4 LTS, CentOS release 5.9, Mac OSX 10.5.8 and Mac OSX 10.8.2, but should run on any system that can have a C++ compiler, MPI and a Python interpreter. Has the code been vectorized or parallelized?: Yes. From one to hundreds of processors depending on the type of input and simulation. RAM: From a few megabytes to several gigabytes depending on input parameters and the size of the system to simulate. Classification: 4.13, 16.13. External routines: KMCLib uses an external Mersenne Twister pseudo random number generator that is included in the code. A Python 2.7 interpreter and a standard C++ runtime library are needed to run the serial version of the code. For running the parallel version an MPI implementation is needed, such as e.g. MPICH from http://www.mpich.org or Open-MPI from http://www.open-mpi.org. SWIG (obtainable from http://www.swig.org/) and CMake (obtainable from http://www.cmake.org/) are needed for building the backend module, Sphinx (obtainable from http://sphinx-doc.org) for building the documentation and CPPUNIT (obtainable from http://sourceforge.net/projects/cppunit/) for building the C++ unit tests. Nature of problem: Atomic scale simulation of slowly evolving dynamics is a great challenge in many areas of computational materials science and catalysis. 
When the rare-events dynamics of interest is orders of magnitude slower than the typical atomic vibrational frequencies, a straight-forward propagation of the equations of motion for the particles in the simulation cannot reach time scales of relevance for modeling the slow dynamics. Solution method: KMCLib provides an implementation of the kinetic Monte Carlo (KMC) method that solves the slow dynamics problem by utilizing the separation of time scales between fast vibrational motion and the slowly evolving rare-events dynamics. Only the latter is treated explicitly and the system is simulated as jumping between fully equilibrated local energy minima on the slow-dynamics potential energy surface. Restrictions: KMCLib implements the lattice KMC method and is as such restricted to geometries that can be expressed on a grid in space. Unusual features: KMCLib has been designed to be easily customized, to allow for user-defined functionality and integration with other codes. The user can define her own on-the-fly rate calculator via a Python API, so that site-specific elementary process rates, or rates depending on long-range interactions or complex geometrical features, can easily be included. KMCLib also allows for on-the-fly analysis with user-defined analysis modules. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids. Additional comments: The full documentation of the program is distributed with the code and can also be found at http://www.github.com/leetmaa/KMCLib/manual Running time: From a few seconds to several days depending on the type of simulation and input parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, X; Liu, L; Xing, L
Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and planning evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component that provides visualizations and a user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation running the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibited potential for future cloud-based radiotherapy.
ParaView visualization of Abaqus output on the mechanical deformation of complex microstructures
NASA Astrophysics Data System (ADS)
Liu, Qingbin; Li, Jiang; Liu, Jie
2017-02-01
Abaqus® is a popular software suite for finite element analysis. It delivers linear and nonlinear analyses of mechanical and fluid dynamics problems, including multi-body systems and multi-physics coupling. However, the visualization capability of Abaqus using its CAE module is limited. Models from microtomography have extremely complicated structures, and datasets of Abaqus output are huge, requiring a visualization tool more powerful than Abaqus/CAE. We convert Abaqus output into the XML-based VTK format by developing a Python script and then use ParaView to visualize the results. Capabilities such as volume rendering, tensor glyphs, superior animation and other filters allow ParaView to offer excellent visualizations. ParaView's parallel visualization makes it possible to visualize very big data. To support full parallel visualization, the Python script achieves data partitioning by reorganizing all nodes, elements and the corresponding results on those nodes and elements. The data partition scheme minimizes data redundancy and works efficiently. Given its good readability and extendibility, the script can be extended to the processing of other problems in Abaqus. We share the script with Abaqus users on GitHub.
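The conversion idea, writing nodes, elements, and nodal results into VTK's XML unstructured-grid (.vtu) format for ParaView, can be sketched as follows; the mesh here is a single hard-coded tetrahedron with made-up nodal data rather than real Abaqus output:

```python
def write_vtu(filename, points, tets, point_data):
    """Write a minimal ASCII .vtu file (VTK XML unstructured grid)."""
    with open(filename, "w") as f:
        f.write('<VTKFile type="UnstructuredGrid" version="0.1" byte_order="LittleEndian">\n')
        f.write('  <UnstructuredGrid>\n')
        f.write('    <Piece NumberOfPoints="%d" NumberOfCells="%d">\n' % (len(points), len(tets)))
        f.write('      <Points>\n')
        f.write('        <DataArray type="Float64" NumberOfComponents="3" format="ascii">\n')
        for x, y, z in points:
            f.write("          %g %g %g\n" % (x, y, z))
        f.write('        </DataArray>\n      </Points>\n      <Cells>\n')
        f.write('        <DataArray type="Int32" Name="connectivity" format="ascii">\n')
        for c in tets:
            f.write("          %d %d %d %d\n" % tuple(c))
        f.write('        </DataArray>\n')
        f.write('        <DataArray type="Int32" Name="offsets" format="ascii">\n')
        f.write("          " + " ".join(str(4 * (i + 1)) for i in range(len(tets))) + "\n")
        f.write('        </DataArray>\n')
        f.write('        <DataArray type="UInt8" Name="types" format="ascii">\n')
        f.write("          " + " ".join("10" for _ in tets) + "\n")  # 10 = VTK tetrahedron
        f.write('        </DataArray>\n      </Cells>\n')
        f.write('      <PointData Scalars="U">\n')
        f.write('        <DataArray type="Float64" Name="U" format="ascii">\n')
        f.write("          " + " ".join("%g" % v for v in point_data) + "\n")
        f.write('        </DataArray>\n      </PointData>\n')
        f.write('    </Piece>\n  </UnstructuredGrid>\n</VTKFile>\n')

# One tetrahedron with a made-up nodal result field.
write_vtu("demo.vtu",
          points=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
          tets=[(0, 1, 2, 3)],
          point_data=[0.0, 0.1, 0.2, 0.3])
```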
gadfly: A pandas-based Framework for Analyzing GADGET Simulation Data
NASA Astrophysics Data System (ADS)
Hummel, Jacob A.
2016-11-01
We present the first public release (v0.1) of the open-source gadget Dataframe Library: gadfly. The aim of this package is to leverage the capabilities of the broader python scientific computing ecosystem by providing tools for analyzing simulation data from the astrophysical simulation codes gadget and gizmo using pandas, a thoroughly documented, open-source library providing high-performance, easy-to-use data structures that is quickly becoming the standard for data analysis in python. Gadfly is a framework for analyzing particle-based simulation data stored in the HDF5 format using pandas DataFrames. The package enables efficient memory management, includes utilities for unit handling, coordinate transformations, and parallel batch processing, and provides highly optimized routines for visualizing smoothed-particle hydrodynamics data sets.
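The underlying pattern, reading a GADGET/GIZMO HDF5 snapshot into a pandas DataFrame, can be sketched with h5py directly (gadfly wraps this with unit handling and optimized routines); the snapshot path is a placeholder and the group/dataset names follow the usual GADGET-HDF5 layout:

```python
import h5py
import pandas as pd

with h5py.File("snapshot_000.hdf5", "r") as snap:
    coords = snap["PartType0/Coordinates"][...]   # gas particle positions
    dens = snap["PartType0/Density"][...]

gas = pd.DataFrame(coords, columns=["x", "y", "z"])
gas["density"] = dens
print(gas.describe())
```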
BYMUR software: a free and open source tool for quantifying and visualizing multi-risk analyses
NASA Astrophysics Data System (ADS)
Tonini, Roberto; Selva, Jacopo
2013-04-01
The BYMUR software aims to provide an easy-to-use open source tool for both computing multi-risk and managing/visualizing/comparing all the inputs (e.g. hazard, fragilities and exposure) as well as the corresponding results (e.g. risk curves, risk indexes). For all inputs, a complete management of inter-model epistemic uncertainty is considered. The BYMUR software will be one of the final products provided by the homonymous ByMuR project (http://bymur.bo.ingv.it/) funded by Italian Ministry of Education, Universities and Research (MIUR), focused to (i) provide a quantitative and objective general method for a comprehensive long-term multi-risk analysis in a given area, accounting for inter-model epistemic uncertainty through Bayesian methodologies, and (ii) apply the methodology to seismic, volcanic and tsunami risks in Naples (Italy). More specifically, the BYMUR software will be able to separately account for the probabilistic hazard assessment of different kind of hazardous phenomena, the relative (time-dependent/independent) vulnerabilities and exposure data, and their possible (predefined) interactions: the software will analyze these inputs and will use them to estimate both single- and multi- risk associated to a specific target area. In addition, it will be possible to connect the software to further tools (e.g., a full hazard analysis), allowing a dynamic I/O of results. The use of Python programming language guarantees that the final software will be open source and platform independent. Moreover, thanks to the integration of some most popular and rich-featured Python scientific modules (Numpy, Matplotlib, Scipy) with the wxPython graphical user toolkit, the final tool will be equipped with a comprehensive Graphical User Interface (GUI) able to control and visualize (in the form of tables, maps and/or plots) any stage of the multi-risk analysis. The additional features of importing/exporting data in MySQL databases and/or standard XML formats (for instance, the global standards defined in the frame of GEM project for seismic hazard and risk) will grant the interoperability with other FOSS software and tools and, at the same time, to be on hand of the geo-scientific community. An already available example of connection is represented by the BET_VH(**) tool, which probabilistic volcanic hazard outputs will be used as input for BYMUR. Finally, the prototype version of BYMUR will be used for the case study of the municipality of Naples, by considering three different natural hazards (volcanic eruptions, earthquakes and tsunamis) and by assessing the consequent long-term risk evaluation. (**)BET_VH (Bayesian Event Tree for Volcanic Hazard) is probabilistic tool for long-term volcanic hazard assessment, recently re-designed and adjusted to be run on the Vhub cyber-infrastructure, a free web-based collaborative tool in volcanology research (see http://vhub.org/resources/betvh).
Castoe, Todd A; de Koning, Jason A P; Hall, Kathryn T; Yokoyama, Ken D; Gu, Wanjun; Smith, Eric N; Feschotte, Cédric; Uetz, Peter; Ray, David A; Dobry, Jason; Bogden, Robert; Mackessy, Stephen P; Bronikowski, Anne M; Warren, Wesley C; Secor, Stephen M; Pollock, David D
2011-07-28
The Consortium for Snake Genomics is in the process of sequencing the genome and creating transcriptomic resources for the Burmese python. Here, we describe how this will be done, what analyses this work will include, and provide a timeline.
SpiceyPy, a Python Wrapper for SPICE
NASA Astrophysics Data System (ADS)
Annex, A.
2017-06-01
SpiceyPy is an open source Python wrapper for the NAIF SPICE toolkit. It is available for macOS, Linux, and Windows platforms and for Python versions 2.7.x and 3.x as well as Anaconda. SpiceyPy can be installed by running: “pip install spiceypy.”
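After installation, a quick version check and a typical call pattern look like this; the kernel file name is a placeholder that must point to a locally downloaded SPICE kernel:

```python
import spiceypy as spice

print(spice.tkvrsn("TOOLKIT"))             # report the underlying CSPICE toolkit version

spice.furnsh("naif0012.tls")               # load a leapseconds kernel (placeholder path)
et = spice.str2et("2017-06-01T00:00:00")   # convert a UTC string to ephemeris time
print(et)
spice.kclear()                             # unload all kernels
```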
Schilliger, Lionel; Tréhiou-Sechi, Emilie; Petit, Amandine M P; Misbach, Charlotte; Chetboul, Valérie
2010-12-01
Ultrasonography, and, to a lesser extent, echocardiography are now well-established, noninvasive, and painless diagnostic tools in herpetologic medicine. Various cardiac lesions have been previously described in reptiles, but valvulopathy is rarely documented in these animals and, consequently, is poorly understood. In this report, sinoatrial and atrioventricular insufficiencies were diagnosed in a 5-yr-old captive dyspneic Burmese python (Python molurus bivittatus) on the basis of echocardiographic and Doppler examination. This case report is the first to document Doppler assessment of valvular regurgitations in a reptile.
Report on the ''ESO Python Boot Camp — Pilot Version''
NASA Astrophysics Data System (ADS)
Dias, B.; Milli, J.
2017-03-01
The Python programming language is becoming very popular within the astronomical community. Python is a high-level language with multiple applications including database management, handling FITS images and tables, statistical analysis, and more advanced topics. Python is a very powerful tool both for astronomical publications and for observatory operations. Since the best way to learn a new programming language is through practice, we therefore organised a two-day hands-on workshop to share expertise among ESO colleagues. We report here the outcome and feedback from this pilot event.
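One of the workshop topics, handling FITS images from Python, is typically done with astropy; the file name below is a placeholder:

```python
from astropy.io import fits

with fits.open("image.fits") as hdul:
    hdul.info()                           # list the HDUs in the file
    header = hdul[0].header
    data = hdul[0].data                   # image as a NumPy array
    print(header.get("OBJECT"), data.shape, data.mean())
```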
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmus, Jonathan J.; Collis, Scott M.
The Python ARM Radar Toolkit is a package for reading, visualizing, correcting and analysing data from weather radars. Development began to meet the needs of the Atmospheric Radiation Measurement Climate Research Facility and has since expanded to provide a general-purpose framework for working with data from weather radars in the Python programming language. The toolkit is built on top of libraries in the Scientific Python ecosystem including NumPy, SciPy, and matplotlib, and makes use of Cython for interfacing with existing radar libraries written in C and to speed up computationally demanding algorithms. As a result, the source code for the toolkit is available on GitHub and is distributed under a BSD license.
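A typical Py-ART session reads a radar volume and plots one of its fields; the file and field names are placeholders:

```python
import matplotlib.pyplot as plt
import pyart

radar = pyart.io.read("radar_volume.nc")       # many common radar formats are supported
display = pyart.graph.RadarDisplay(radar)
display.plot("reflectivity", 0)                # PPI of the lowest sweep
plt.show()
```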
Hines, Michael L; Davison, Andrew P; Muller, Eilif
2009-01-01
The NEURON simulation program now allows Python to be used, alone or in combination with NEURON's traditional Hoc interpreter. Adding Python to NEURON has the immediate benefit of making available a very extensive suite of analysis tools written for engineering and science. It also catalyzes NEURON software development by offering users a modern programming tool that is recognized for its flexibility and power to create and maintain complex programs. At the same time, nothing is lost because all existing models written in Hoc, including graphical user interface tools, continue to work without change and are also available within the Python context. An example of the benefits of Python availability is the use of the xml module in implementing NEURON's Import3D and CellBuild tools to read MorphML and NeuroML model specifications.
Ciavaglia, Sherryn A; Tobe, Shanan S; Donnellan, Stephen C; Henry, Julianne M; Linacre, Adrian M T
2015-05-01
Python snake species are often encountered in illegal activities and the question of species identity can be pertinent to such criminal investigations. Morphological identification of species of pythons can be confounded by many issues and molecular examination by DNA analysis can provide an alternative and objective means of identification. Our paper reports on the development and validation of a PCR primer pair that amplifies a segment of the mitochondrial cytochrome b gene that has been suggested previously as a good candidate locus for differentiating python species. We used this DNA region to perform species identification of pythons, even when the template DNA was of poor quality, as might be the case with forensic evidentiary items. Validation tests are presented to demonstrate the characteristics of the assay. Tests involved the cross-species amplification of this marker in non-target species, minimum amount of DNA template required, effects of degradation on product amplification and a blind trial to simulate a casework scenario that provided 100% correct identity. Our results demonstrate that this assay performs reliably and robustly on pythons and can be applied directly to forensic investigations where the presence of a species of python is in question. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zollweg, J. A.
2017-10-01
Numerous ground-based, airborne, and orbiting platforms provide remotely-sensed data of remarkable spatial resolution at short time intervals. However, this spatiotemporal data is most valuable if it can be processed into information, thereby creating meaning. We live in a world of objects: cars, buildings, farms, etc. On a stormy day, we don't see millions of cubes of atmosphere; we see a thunderstorm `object'. Temporally, we don't see the properties of those individual cubes changing, we see the thunderstorm as a whole evolving and moving. There is a need to represent the bulky, raw spatiotemporal data from remote sensors as a small number of relevant spatiotemporal objects, thereby matching the human brain's perception of the world. This presentation reveals an efficient algorithm and system to extract the objects/features from raster-formatted remotely-sensed data. The system makes use of the Python object-oriented programming language, SciPy/NumPy for matrix manipulation and scientific computation, and export/import to the GeoJSON standard geographic object data format. The example presented will show how thunderstorms can be identified and characterized in a spatiotemporal continuum using a Python program to process raster data from NOAA's High-Resolution Rapid Refresh v2 (HRRRv2) data stream.
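The object-identification step described here can be sketched with standard SciPy/NumPy tools; the raster below is synthetic and the threshold is an arbitrary stand-in for whichever HRRRv2 field is actually used:
    import numpy as np
    from scipy import ndimage
    field = np.random.rand(200, 200) * 60.0           # synthetic 2-D raster standing in for a gridded field
    mask = field > 50.0                                # cells exceeding a storm-like threshold
    labels, n_objects = ndimage.label(mask)            # connected-component labelling -> candidate "objects"
    sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
    centroids = ndimage.center_of_mass(mask, labels, index=range(1, n_objects + 1))
    print(n_objects, "objects; largest covers", int(sizes.max()), "cells; first centroid:", centroids[0])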
NASA Astrophysics Data System (ADS)
Vallis, Geoffrey K.; Colyer, Greg; Geen, Ruth; Gerber, Edwin; Jucker, Martin; Maher, Penelope; Paterson, Alexander; Pietschnig, Marianne; Penn, James; Thomson, Stephen I.
2018-03-01
Isca is a framework for the idealized modelling of the global circulation of planetary atmospheres at varying levels of complexity and realism. The framework is an outgrowth of models from the Geophysical Fluid Dynamics Laboratory in Princeton, USA, designed for Earth's atmosphere, but it may readily be extended into other planetary regimes. Various forcing and radiation options are available, from dry, time invariant, Newtonian thermal relaxation to moist dynamics with radiative transfer. Options are available in the dry thermal relaxation scheme to account for the effects of obliquity and eccentricity (and so seasonality), different atmospheric optical depths and a surface mixed layer. An idealized grey radiation scheme, a two-band scheme, and a multiband scheme are also available, all with simple moist effects and astronomically based solar forcing. At the complex end of the spectrum the framework provides a direct connection to comprehensive atmospheric general circulation models. For Earth modelling, options include an aquaplanet and configurable continental outlines and topography. Continents may be defined by changing albedo, heat capacity, and evaporative parameters and/or by using a simple bucket hydrology model. Oceanic Q fluxes may be added to reproduce specified sea surface temperatures, with arbitrary continental distributions. Planetary atmospheres may be configured by changing planetary size and mass, solar forcing, atmospheric mass, radiation, and other parameters. Examples are given of various Earth configurations as well as a giant planet simulation, a slowly rotating terrestrial planet simulation, and tidally locked and other orbitally resonant exoplanet simulations. The underlying model is written in Fortran and may largely be configured with Python scripts. Python scripts are also used to run the model on different architectures, to archive the output, and for diagnostics, graphics, and post-processing. All of these features are publicly available in a Git-based repository.
SunPy 0.8 - Python for Solar Physics
NASA Astrophysics Data System (ADS)
Inglis, Andrew; Bobra, Monica; Christe, Steven; Hewett, Russell; Ireland, Jack; Mumford, Stuart; Martinez Oliveros, Juan Carlos; Perez-Suarez, David; Reardon, Kevin P.; Savage, Sabrina; Shih, Albert Y.; Ryan, Daniel; Sipocz, Brigitta; Freij, Nabil
2017-08-01
SunPy is a community-developed open-source software library for solar physics. It is written in Python, a free, cross-platform, general-purpose, high-level programming language which is being increasingly adopted throughout the scientific community. Python is one of the ten most widely used programming languages and, as such, offers a wide array of software packages, from numerical computation (NumPy, SciPy), machine learning (scikit-learn) and signal processing (scikit-image, statsmodels) to visualization and plotting (matplotlib, mayavi). SunPy aims to provide the software for obtaining and analyzing solar and heliospheric data. This poster introduces a new major release of SunPy (0.8). This release includes two major new functionalities, as well as a number of bug fixes. It is based on 1120 contributions from 34 unique contributors. Fido is the new primary interface to download data. It provides a consistent and powerful search interface to all major data sources, including VSO and JSOC, as well as individual data sources such as GOES XRS time series, and is fully pluggable to add new data sources, e.g. DKIST. In anticipation of Solar Orbiter and the Parker Solar Probe, SunPy now provides a powerful way of representing coordinates, allowing conversion between coordinate systems and viewpoints of different instruments, including preliminary reprojection capabilities. Other new features include new timeseries capabilities with better support for concatenation and metadata, as well as updated documentation and example gallery. SunPy is distributed through pip and conda and all of its code is publicly available (sunpy.org).
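A hedged sketch of the Fido interface described above (SunPy 0.8 or later); the time range and instrument are arbitrary examples and the exact search attributes accepted depend on the SunPy version:
    from sunpy.net import Fido, attrs as a
    # search the registered clients for GOES XRS data in a one-day window
    result = Fido.search(a.Time("2017-08-01", "2017-08-02"), a.Instrument("XRS"))
    print(result)
    files = Fido.fetch(result)                   # download the matching files
    print(files)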
Hodshon, Rebecca T; Sura, Patricia A; Schumacher, Juergen P; Odoi, Agricola; Steeil, James C; Newkirk, Kim M
2013-03-01
To evaluate first-intention healing of CO(2) laser, 4.0-MHz radiowave radiosurgery (RWRS), and scalpel incisions in ball pythons (Python regius). 6 healthy adult ball pythons. A skin biopsy sample was collected, and 2-cm skin incisions (4/modality) were made in each snake under anesthesia and closed with surgical staples on day 0. Incision sites were grossly evaluated and scored daily. One skin biopsy sample per incision type per snake was obtained on days 2, 7, 14, and 30. Necrotic and fibroplastic tissue was measured in histologic sections; samples were assessed and scored for total inflammation, histologic response (based on the measurement of necrotic and fibroplastic tissues and total inflammation score), and other variables. Frequency distributions of gross and histologic variables associated with wound healing were calculated. Gross wound scores were significantly greater (indicating greater separation of wound edges) for laser incisions than for RWRS and scalpel incisions at all evaluated time points. Necrosis was significantly greater in laser and RWRS incisions than in scalpel incision sites on days 2 and 14 and days 2 and 7, respectively; fibroplasia was significantly greater in laser than in scalpel incision sites on day 30. Histologic response scores were significantly lower for scalpel than for other incision modalities on days 2, 14, and 30. In snakes, skin incisions made with a scalpel generally had less necrotic tissue than did CO(2) laser and RWRS incisions. Comparison of the 3 modalities on the basis of histologic response scores indicated that use of a scalpel was preferable, followed by RWRS and then laser.
Renal plasticity in response to feeding in the Burmese python, Python molurus bivittatus.
Esbaugh, A J; Secor, S M; Grosell, M
2015-10-01
Burmese pythons are sit-and-wait predators that are well adapted to go long periods without food, yet subsequently consume and digest single meals that can exceed their body weight. These large feeding events result in a dramatic alkaline tide that is compensated by a hypoventilatory response that normalizes plasma pH; however, little is known regarding how plasma HCO3(-) is lowered in the days post-feeding. The current study demonstrated that Burmese pythons contain the cellular machinery for renal acid-base compensation and actively remodel the kidney to limit HCO3(-) reabsorption in the post-feeding period. After being fed a 25% body weight meal, plasma total CO2 was elevated 1.5-fold after 1 day but returned to control concentrations by 4 days post-feeding (dpf). Gene expression analysis was used to verify the presence of carbonic anhydrase (CA) II, IV and XIII, Na(+) H(+) exchanger 3 (NHE3), the Na(+) HCO3(-) co-transporter (NBC) and V-type ATPase. CA IV expression was significantly down-regulated at 3 dpf versus fasted controls. This was supported by activity analysis that showed a significant decrease in the amount of GPI-linked CA activity in isolated kidney membranes at 3 dpf versus fasted controls. In addition, V-type ATPase activity was significantly up-regulated at 3 dpf; no change in gene expression was observed. Both CA II and NHE3 expression were up-regulated at 3 dpf, which may be related to post-prandial ion balance. These results suggest that Burmese pythons actively remodel their kidney after feeding, which would in part benefit renal HCO3(-) clearance. Copyright © 2015 Elsevier Inc. All rights reserved.
The role of nitric oxide in regulation of the cardiovascular system in reptiles.
Skovgaard, Nini; Galli, Gina; Abe, Augusto; Taylor, Edwin W; Wang, Tobias
2005-10-01
The roles that nitric oxide (NO) plays in the cardiovascular system of reptiles are reviewed, with particular emphasis on its effects on central vascular blood flows in the systemic and pulmonary circulations. New data is presented that describes the effects on hemodynamic variables in varanid lizards of exogenously administered NO via the nitric oxide donor sodium nitroprusside (SNP) and inhibition of nitric oxide synthase (NOS) by l-nitroarginine methyl ester (l-NAME). Furthermore, preliminary data on the effects of SNP on hemodynamic variables in the tegu lizard are presented. The findings are compared with previously published data from our laboratory on three other species of reptiles: pythons, rattlesnakes and turtles. These five species of reptiles possess different combinations of division of the heart and structural complexity of the lungs. Comparison of their responses to NO donors and NOS inhibitors may reveal whether the potential contribution of NO to vascular tone correlates with pulmonary complexity and/or with blood pressure. All existing studies on reptiles have clearly established a potential role for NO in regulating vascular tone in the systemic circulation and NO may be important for maintaining basal systemic vascular tone in varanid lizards, pythons and turtles, through a continuous release of NO. In contrast, the pulmonary circulation is less responsive to NO donors or NOS inhibitors, and it was only in pythons and varanid lizards that the lungs responded to SNP. Both species have a functionally separated heart, so it is possible that NO may exert a larger role in species with low pulmonary blood pressures, irrespective of lung complexity.
IPython: components for interactive and parallel computing across disciplines. (Invited)
NASA Astrophysics Data System (ADS)
Perez, F.; Bussonnier, M.; Frederic, J. D.; Froehle, B. M.; Granger, B. E.; Ivanov, P.; Kluyver, T.; Patterson, E.; Ragan-Kelley, B.; Sailer, Z.
2013-12-01
Scientific computing is an inherently exploratory activity that requires constantly cycling between code, data and results, each time adjusting the computations as new insights and questions arise. To support such a workflow, good interactive environments are critical. The IPython project (http://ipython.org) provides a rich architecture for interactive computing with: 1. Terminal-based and graphical interactive consoles. 2. A web-based Notebook system with support for code, text, mathematical expressions, inline plots and other rich media. 3. Easy to use, high performance tools for parallel computing. Despite its roots in Python, the IPython architecture is designed in a language-agnostic way to facilitate interactive computing in any language. This allows users to mix Python with Julia, R, Octave, Ruby, Perl, Bash and more, as well as to develop native clients in other languages that reuse the IPython clients. In this talk, I will show how IPython supports all stages in the lifecycle of a scientific idea: 1. Individual exploration. 2. Collaborative development. 3. Production runs with parallel resources. 4. Publication. 5. Education. In particular, the IPython Notebook provides an environment for "literate computing" with a tight integration of narrative and computation (including parallel computing). These Notebooks are stored in a JSON-based document format that provides an "executable paper": notebooks can be version controlled, exported to HTML or PDF for publication, and used for teaching.
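Because notebooks are stored as JSON documents, they can be processed programmatically; a small hedged example using the nbformat package with a placeholder file name:
    import nbformat
    nb = nbformat.read("analysis.ipynb", as_version=4)      # load a notebook (placeholder name)
    code_cells = [c for c in nb.cells if c.cell_type == "code"]
    print(len(code_cells), "code cells")
    nbformat.write(nb, "analysis_copy.ipynb")                # write the JSON document back to disk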
A parvovirus isolated from royal python (Python regius) is a member of the genus Dependovirus.
Farkas, Szilvia L; Zádori, Zoltán; Benko, Mária; Essbauer, Sandra; Harrach, Balázs; Tijssen, Peter
2004-03-01
Parvoviruses were isolated from Python regius and Boa constrictor snakes and propagated in viper heart (VH-2) and iguana heart (IgH-2) cells. The full-length genome of a snake parvovirus was cloned and both strands were sequenced. The organization of the 4432-nt-long genome was found to be typical of parvoviruses. This genome was flanked by inverted terminal repeats (ITRs) of 154 nt, containing 122 nt terminal hairpins and contained two large open reading frames, encoding the non-structural and structural proteins. Genes of this new parvovirus were most similar to those from waterfowl parvoviruses and from adeno-associated viruses (AAVs), albeit to a relatively low degree and with some organizational differences. The structure of its ITRs also closely resembled those of AAVs. Based on these data, we propose to classify this virus, the first serpentine parvovirus to be identified, as serpentine adeno-associated virus (SAAV) in the genus Dependovirus.
Data visualization and analysis tools for the MAVEN mission
NASA Astrophysics Data System (ADS)
Harter, B.; De Wolfe, A. W.; Putnam, B.; Brain, D.; Chaffin, M.
2016-12-01
The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. We have developed new software tools for exploring and analyzing the science data. Our open-source Python toolkit for working with data from MAVEN and other missions is based on the widely-used "tplot" IDL toolkit. We have replicated all of the basic tplot functionality in Python, and use the bokeh and matplotlib libraries to generate interactive line plots and spectrograms, providing additional functionality beyond the capabilities of IDL graphics. These Python tools are generalized to work with missions beyond MAVEN, and our software is available on Github. We have also been exploring 3D graphics as a way to better visualize the MAVEN science data and models. We have constructed a 3D visualization of MAVEN's orbit using the CesiumJS library, which not only allows viewing of MAVEN's orientation and position, but also allows the display of selected science data sets and their variation over time.
New Python-based methods for data processing
Sauter, Nicholas K.; Hattne, Johan; Grosse-Kunstleve, Ralf W.; Echols, Nathaniel
2013-01-01
Current pixel-array detectors produce diffraction images at extreme data rates (of up to 2 TB h−1) that make severe demands on computational resources. New multiprocessing frameworks are required to achieve rapid data analysis, as it is important to be able to inspect the data quickly in order to guide the experiment in real time. By utilizing readily available web-serving tools that interact with the Python scripting language, it was possible to implement a high-throughput Bragg-spot analyzer (cctbx.spotfinder) that is presently in use at numerous synchrotron-radiation beamlines. Similarly, Python interoperability enabled the production of a new data-reduction package (cctbx.xfel) for serial femtosecond crystallography experiments at the Linac Coherent Light Source (LCLS). Future data-reduction efforts will need to focus on specialized problems such as the treatment of diffraction spots on interleaved lattices arising from multi-crystal specimens. In these challenging cases, accurate modeling of close-lying Bragg spots could benefit from the high-performance computing capabilities of graphics-processing units. PMID:23793153
PyBoolNet: a python package for the generation, analysis and visualization of boolean networks.
Klarner, Hannes; Streck, Adam; Siebert, Heike
2017-03-01
The goal of this project is to provide a simple interface to working with Boolean networks. Emphasis is put on easy access to a large number of common tasks including the generation and manipulation of networks, attractor and basin computation, model checking and trap space computation, execution of established graph algorithms as well as graph drawing and layouts. PyBoolNet is a Python package for working with Boolean networks that supports simple access to model checking via NuSMV, standard graph algorithms via NetworkX and visualization via dot. In addition, state-of-the-art attractor computation exploiting Potassco ASP is implemented. The package is function-based and uses only native Python and NetworkX data types. https://github.com/hklarner/PyBoolNet. hannes.klarner@fu-berlin.de. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Tethered Forth system for FPGA applications
NASA Astrophysics Data System (ADS)
Goździkowski, Paweł; Zabołotny, Wojciech M.
2013-10-01
This paper presents a tethered Forth system dedicated to the testing and debugging of FPGA-based electronic systems. Use of the Forth language allows complex testing or debugging routines to be developed and run interactively. The solution is based on a small, 16-bit soft-core CPU used to implement the Forth Virtual Machine. Thanks to the tethered Forth model, usage of internal RAM in the FPGA is minimized. The function of the intelligent terminal, which is an essential part of the tethered Forth system, may be fulfilled by a standard PC or by a smartphone. The system is implemented in Python (the software for the intelligent terminal) and in VHDL (the IP core for the FPGA), so it can be easily ported to different hardware platforms. The connection between the terminal and the FPGA may be established and disconnected many times without disturbing the state of the FPGA-based system. The presented system has been verified in hardware and may be used as a tool for debugging, testing and even implementing control algorithms for FPGA-based systems.
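The intelligent-terminal side could be as simple as a serial link driven from Python; the sketch below uses pyserial with a hypothetical device path and Forth command, and is illustrative only (the paper does not specify this exact transport API):
    import serial
    # open the link to the FPGA board (device path and baud rate are placeholders)
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        port.write(b"1 2 + .\r\n")               # send a Forth expression to the target interpreter
        reply = port.readline()                  # read the interpreter's response
        print(reply.decode(errors="replace"))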
UCam: universal camera controller and data acquisition system
NASA Astrophysics Data System (ADS)
McLay, S. A.; Bezawada, N. N.; Atkinson, D. C.; Ives, D. J.
2010-07-01
This paper describes the software architecture and design concepts used in the UKATC's generic camera control and data acquisition software system (UCam), which was originally developed for use with the ARC controller hardware. The ARC detector control electronics are developed by Astronomical Research Cameras (ARC) of San Diego, USA. UCam provides an alternative software solution programmed in C/C++ and Python that runs on a real-time Linux operating system to achieve the critical speed performance required for high time resolution instrumentation. UCam is a server-based application that can be accessed remotely and easily integrated as part of a larger instrument control system. It comes with a user-friendly client application interface that has several features, including a FITS header editor and support for interfacing with network devices. Support is also provided for writing automated scripts in Python or as text files. UCam has an application-centric design in which custom applications for different types of detectors and readout modes can be developed, downloaded and executed on the ARC controller. The built-in de-multiplexer can be easily reconfigured to read out any number of channels for almost any type of detector. It also provides support for numerous sampling modes such as CDS, FOWLER, NDR and threshold-limited NDR. UCam has been developed over several years for use on many instruments, such as the Wide Field Infra Red Camera (WFCAM) at UKIRT in Hawaii and the mid-IR imager/spectrometer UIST, and is also used on instruments at Subaru, Gemini and Palomar.
Geospatial Data Stream Processing in Python Using FOSS4G Components
NASA Astrophysics Data System (ADS)
McFerren, G.; van Zyl, T.
2016-06-01
One viewpoint of current and future IT systems holds that there is an increase in the scale and velocity at which data are acquired and analysed from heterogeneous, dynamic sources. In the earth observation and geoinformatics domains, this process is driven by the increase in number and types of devices that report location and the proliferation of assorted sensors, from satellite constellations to oceanic buoy arrays. Much of these data will be encountered as self-contained messages on data streams - continuous, infinite flows of data. Spatial analytics over data streams concerns the search for spatial and spatio-temporal relationships within and amongst data "on the move". In spatial databases, queries can assess a store of data to unpack spatial relationships; this is not the case on streams, where spatial relationships need to be established with the incomplete data available. Methods for spatially-based indexing, filtering, joining and transforming of streaming data need to be established and implemented in software components. This article describes the usage patterns and performance metrics of a number of well known FOSS4G Python software libraries within the data stream processing paradigm. In particular, we consider the RTree library for spatial indexing, the Shapely library for geometric processing and transformation and the PyProj library for projection and geodesic calculations over streams of geospatial data. We introduce a message oriented Python-based geospatial data streaming framework called Swordfish, which provides data stream processing primitives, functions, transports and a common data model for describing messages, based on the Open Geospatial Consortium Observations and Measurements (O&M) and Unidata Common Data Model (CDM) standards. We illustrate how the geospatial software components are integrated with the Swordfish framework. Furthermore, we describe the tight temporal constraints under which geospatial functionality can be invoked when processing high velocity, potentially infinite geospatial data streams. The article discusses the performance of these libraries under simulated streaming loads (size, complexity and volume of messages) and how they can be deployed and utilised with Swordfish under real load scenarios, illustrated by a set of Vessel Automatic Identification System (AIS) use cases. We conclude that the described software libraries are able to perform adequately under geospatial data stream processing scenarios - many real application use cases will be handled sufficiently by the software.
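As an illustration of how these libraries combine in a stream-processing setting, the sketch below indexes incoming point messages with Rtree, filters them against a Shapely region and computes geodesic distances with pyproj; the coordinates, region and messages are made up:
    from pyproj import Geod
    from rtree import index
    from shapely.geometry import Point, Polygon
    region = Polygon([(18.0, -34.5), (19.0, -34.5), (19.0, -33.5), (18.0, -33.5)])   # lon/lat box
    geod = Geod(ellps="WGS84")
    idx = index.Index()
    stream = [(0, 18.4, -33.9), (1, 18.6, -34.0), (2, 25.6, -33.9)]                  # (id, lon, lat) messages
    for msg_id, lon, lat in stream:
        idx.insert(msg_id, (lon, lat, lon, lat))          # point bounding box into the spatial index
        if region.contains(Point(lon, lat)):              # spatial filter on the incoming message
            _, _, dist = geod.inv(18.4, -33.9, lon, lat)  # geodesic distance (m) to a reference point
            print(msg_id, "inside region,", round(dist), "m from reference")
    print(list(idx.intersection((18.0, -34.5, 19.0, -33.5))))   # ids falling in the query window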
Brough, David B; Wheeler, Daniel; Kalidindi, Surya R
2017-03-01
There is a critical need for customized analytics that take into account the stochastic nature of the internal structure of materials at multiple length scales in order to extract relevant and transferable knowledge. Data-driven Process-Structure-Property (PSP) linkages provide a systemic, modular and hierarchical framework for community-driven curation of materials knowledge and its transference to design and manufacturing experts. The Materials Knowledge Systems in Python project (PyMKS) is the first open-source materials data science framework that can be used to create high-value PSP linkages for hierarchical materials, which can be leveraged by experts in the materials science and engineering, manufacturing, machine learning and data science communities. This paper describes the main functions available from this repository, along with illustrations of how these can be accessed, utilized, and potentially further refined by the broader community of researchers.
A comparative study of programming languages for next-generation astrodynamics systems
NASA Astrophysics Data System (ADS)
Eichhorn, Helge; Cano, Juan Luis; McLean, Frazer; Anderl, Reiner
2018-03-01
Due to the computationally intensive nature of astrodynamics tasks, astrodynamicists have relied on compiled programming languages such as Fortran for the development of astrodynamics software. Interpreted languages such as Python, on the other hand, offer higher flexibility and development speed thereby increasing the productivity of the programmer. While interpreted languages are generally slower than compiled languages, recent developments such as just-in-time (JIT) compilers or transpilers have been able to close this speed gap significantly. Another important factor for the usefulness of a programming language is its wider ecosystem which consists of the available open-source packages and development tools such as integrated development environments or debuggers. This study compares three compiled languages and three interpreted languages, which were selected based on their popularity within the scientific programming community and technical merit. The three compiled candidate languages are Fortran, C++, and Java. Python, Matlab, and Julia were selected as the interpreted candidate languages. All six languages are assessed and compared to each other based on their features, performance, and ease-of-use through the implementation of idiomatic solutions to classical astrodynamics problems. We show that compiled languages still provide the best performance for astrodynamics applications, but JIT-compiled dynamic languages have reached a competitive level of speed and offer an attractive compromise between numerical performance and programmer productivity.
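To make the JIT point concrete, the hedged sketch below accelerates a simple two-body acceleration kernel with numba, one of the JIT tools available for Python (given only as an illustration; the study compares languages rather than prescribing this package):
    import numpy as np
    from numba import njit
    MU_EARTH = 398600.4418                       # km^3/s^2, gravitational parameter of Earth
    @njit
    def two_body_accel(r):
        # point-mass gravitational acceleration (km/s^2) for position vector r (km)
        d = np.sqrt(r[0]**2 + r[1]**2 + r[2]**2)
        return -MU_EARTH / d**3 * r
    r = np.array([7000.0, 0.0, 0.0])
    print(two_body_accel(r))                     # first call compiles; subsequent calls run at native speed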
James, Lauren E; Williams, Catherine Ja; Bertelsen, Mads F; Wang, Tobias
2018-05-01
To characterise the minimum dose of intramuscular alfaxalone required to facilitate intubation for mechanical ventilation, and to investigate the impact of cranial versus caudal injection on anaesthetic depth. Randomised crossover study. Six healthy juvenile ball pythons (Python regius). Three dosages (10, 20 and 30 mg kg-1) of alfaxalone were administered to each python in a caudal location with a minimum 2-week washout. Induction and recovery were monitored by assessing muscle tone, righting reflex, response to a noxious stimulus and the ability to intubate. A subsequent experiment assessed the influence of injection site by comparing administration of 20 mg kg-1 alfaxalone in a cranial location (1 cm cranial to the heart) with the caudal site. Respiration rate was monitored throughout, and when intubation was possible, snakes were mechanically ventilated. Regardless of dose and injection site, maximum effect was reached within 10.0 ± 2.7 minutes. When administered at the caudal injection site, intubation was only successful after a dosage of 30 mg kg-1, which is higher than in previous reports for other reptiles. However, intubation was possible in all cases after 7.2 ± 1.6 minutes upon cranial administration of 20 mg kg-1, and anaesthetic duration was significantly lengthened (p < 0.001). Both 30 mg kg-1 at the caudal site and 20 mg kg-1 at the cranial site led to apnoea approximately 10 minutes post-injection, at which time the snakes were intubated and mechanically ventilated. Alfaxalone provided rapid, smooth induction when administered intramuscularly to pythons, and may serve as a useful induction agent prior to provision of volatile anaesthetics. The same dosage injected in the cranial site led to deeper anaesthesia than when injected caudally, suggesting that shunting to the liver and first-pass metabolism of alfaxalone occur when injected caudally, via the renal portal system. Copyright © 2018 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.
Evaluating GPS biologging technology for studying spatial ecology of large constricting snakes
Smith, Brian; Hart, Kristen M.; Mazzotti, Frank J.; Basille, Mathieu; Romagosa, Christina M.
2018-01-01
Background: GPS telemetry has revolutionized the study of animal spatial ecology in the last two decades. Until recently, it has mainly been deployed on large mammals and birds, but the technology is rapidly becoming miniaturized, and applications in diverse taxa are becoming possible. Large constricting snakes are top predators in their ecosystems, and accordingly they are often a management priority, whether their populations are threatened or invasive. Fine-scale GPS tracking datasets could greatly improve our ability to understand and manage these snakes, but the ability of this new technology to deliver high-quality data in this system is unproven. In order to evaluate GPS technology in large constrictors, we GPS-tagged 13 Burmese pythons (Python bivittatus) in Everglades National Park and deployed an additional 7 GPS tags on stationary platforms to evaluate habitat-driven biases in GPS locations. Both python and test platform GPS tags were programmed to attempt a GPS fix every 90 min. Results: While overall fix rates for the tagged pythons were low (18.1%), we were still able to obtain an average of 14.5 locations/animal/week, a large improvement over once-weekly VHF tracking. We found overall accuracy and precision to be very good (mean accuracy = 7.3 m, mean precision = 12.9 m), but a very few imprecise locations were still recorded (0.2% of locations with precision > 1.0 km). We found that dense vegetation did decrease fix rate, but we concluded that the low observed fix rate was also due to python microhabitat selection underground or underwater. Half of our recovered pythons were either missing their tag or the tag had malfunctioned, resulting in no data being recovered. Conclusions: GPS biologging technology is a promising tool for obtaining frequent, accurate, and precise locations of large constricting snakes. We recommend future studies couple GPS telemetry with frequent VHF locations in order to reduce bias and limit the impact of catastrophic failures on data collection, and we recommend improvements to GPS tag design to lessen the frequency of these failures.
Gardiner, David W; Baines, Frances M; Pandher, Karamjeet
2009-12-01
A male ball python (Python regius) and a female blue tongue skink (Tiliqua spp.) of unknown age were evaluated for anorexia, lethargy, excessive shedding, corneal opacity (python), and weight loss (skink) of approximately three weeks' duration. These animals represented the worst affected animals from a private herpetarium where many animals exhibited similar signs. At necropsy, the python had bilateral corneal opacity and scattered moderate dysecdysis. The skink had mild dysecdysis, poor body condition, moderate intestinal nematodiasis, and mild liver atrophy. Microscopic evaluation revealed epidermal erosion and ulceration, with severe epidermal basal cell degeneration and necrosis, and superficial dermatitis (python and skink). Severe bilateral ulcerative keratoconjunctivitis with bacterial colonization was noted in the ball python. Microscopic findings within the skin and eyes were suggestive of ultraviolet (UV) radiation damage or of photodermatitis and photokeratoconjunctivitis. Removal of the recently installed new lamps from the terrariums of the surviving reptiles resulted in resolution of clinical signs. Evaluation of a sample lamp of the type associated with these cases revealed an extremely high UV output, including very-short-wavelength UVB, neither found in natural sunlight nor emitted by several other UVB lamps unassociated with photokeratoconjunctivitis. Exposure to high-intensity and/or inappropriate wavelengths of UV radiation may be associated with significant morbidity, and even mortality, in reptiles. Veterinarians who are presented with reptiles with ocular and/or cutaneous disease of unapparent cause should fully evaluate the specifics of the vivarium light sources. Further research is needed to determine the characteristics of appropriate and of toxic UV light for reptiles kept in captivity.
Huder, Jon B.; Böni, Jürg; Hatt, Jean-Michel; Soldati, Guido; Lutz, Hans; Schüpbach, Jörg
2002-01-01
Boid inclusion body disease (BIBD) is a fatal disorder of boid snakes that is suspected to be caused by a retrovirus. In order to identify this agent, leukocyte cultures (established from Python molurus specimens with symptoms of BIBD or kept together with such diseased animals) were assessed for reverse transcriptase (RT) activity. Virus from cultures exhibiting high RT activity was banded on sucrose density gradients, and the RT peak fraction was subjected to highly efficient procedures for the identification of unknown particle-associated retroviral RNA. A 7-kb full retroviral sequence was identified, cloned, and sequenced. This virus contained intact open reading frames (ORFs) for gag, pro, pol, and env, as well as another ORF of unknown function within pol. Phylogenetic analysis showed that the virus is distantly related to viruses from both the B and D types and the mammalian C type but cannot be classified. It is present as a highly expressed endogenous retrovirus in all P. molurus individuals; a closely related, but much less expressed virus was found in all tested Python curtus individuals. All other boid snakes tested, including Python regius, Python reticulatus, Boa constrictor, Eunectes notaeus, and Morelia spilota, were virus negative, independent of whether they had BIBD or not. Virus isolated from P. molurus could not be transmitted to the peripheral blood mononuclear cells of B. constrictor or P. regius. Thus, there is no indication that this novel virus, which we propose to name python endogenous retrovirus (PyERV), is causally linked with BIBD. PMID:12097574
Banzato, Tommaso; Russo, Elisa; Finotti, Luca; Milan, Maria C; Gianesella, Matteo; Zotti, Alessandro
2012-05-01
To determine the ultrasonographic features of the coelomic organs of healthy snakes belonging to the Boidae and Pythonidae families. 16 ball pythons (Python regius; 7 males, 8 females, and 1 sexually immature), 10 Indian rock pythons (Python molurus molurus; 5 males, 4 females, and 1 sexually immature), 12 Python curtus (5 males and 7 females), and 8 boa constrictors (Boa constrictor imperator; 4 males and 4 females). All snakes underwent complete ultrasonographic evaluation of the coelomic cavity; chemical restraint was not necessary. A dorsolateral approach to probe placement was chosen to increase image quality and to avoid injury to the snakes and operators. Qualitative and quantitative observations were recorded. The liver, stomach, gallbladder, pancreas, small and large intestines, kidneys, cloaca, and scent glands were identified in all snakes. The hemipenes were identified in 10 of the 21 (48%) male snakes. The spleen was identified in 5 of the 46 (11%) snakes, and ureters were identified in 6 (13%). In 2 sexually immature snakes, the gonads were not visible. One (2%) snake was gravid, and 7 (15%) had small amounts of free fluid in the coelomic cavity. A significant positive correlation was identified between several measurements (diameter and thickness of scent glands, gastric and pyloric walls, and colonic wall) and body length (snout to vent) and body weight. The study findings can be used as an atlas of the ultrasonographic anatomy of the coelomic cavity in healthy boid snakes. Ultrasonography was reasonably fast to perform and was well tolerated in conscious snakes.
Zarinabad, Niloufar; Meeus, Emma M; Manias, Karen; Foster, Katharine
2018-01-01
Background: Advances in magnetic resonance imaging and the introduction of clinical decision support systems have underlined the need for an analysis tool to extract and analyze relevant information from magnetic resonance imaging data to aid decision making, prevent errors, and enhance health care. Objective: The aim of this study was to design and develop a modular medical image region of interest analysis tool and repository (MIROR) for automatic processing, classification, evaluation, and representation of advanced magnetic resonance imaging data. Methods: The clinical decision support system was developed and evaluated for diffusion-weighted imaging of body tumors in children (cohort of 48 children, with 37 malignant and 11 benign tumors). MeVisLab software and Python were used for the development of MIROR. Regions of interest were drawn around benign and malignant body tumors on different diffusion parametric maps, and the extracted information was used to discriminate the malignant tumors from the benign tumors. Results: Using MIROR, the various histogram parameters derived for each tumor case, when compared with the information in the repository, provided additional information for tumor characterization and facilitated the discrimination between benign and malignant tumors. Clinical decision support system cross-validation showed high sensitivity and specificity in discriminating between these tumor groups using histogram parameters. Conclusions: MIROR, as a diagnostic tool and repository, allows the interpretation and analysis of magnetic resonance images to be more accessible and comprehensive for clinicians. It aims to increase clinicians' skillset by introducing newer techniques and up-to-date findings to their repertoire and to make information from previous cases available to aid decision making. The modular format of the tool allows integration of analyses that are not readily available clinically and streamlines future developments. PMID:29720361
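The histogram parameters mentioned above can be illustrated with NumPy on a synthetic region of interest; the values and the particular statistics below are made-up examples of the kind of features such a tool might extract, not the ones used by MIROR:
    import numpy as np
    roi = np.random.normal(loc=1.1e-3, scale=2.0e-4, size=500)   # synthetic ADC-like values from an ROI
    features = {
        "mean": roi.mean(),
        "median": np.median(roi),
        "p10": np.percentile(roi, 10),
        "p90": np.percentile(roi, 90),
        "skewness": ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3,
    }
    print(features)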
ERIC Educational Resources Information Center
Ashraf, Rasha
2017-01-01
This article presents Python codes that can be used to extract data from Securities and Exchange Commission (SEC) filings. The Python program web crawls to obtain URL paths for company filings of required reports, such as Form 10-K. The program then performs a textual analysis and counts the number of occurrences of words in the filing that…
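A hedged sketch of the two steps the article describes (downloading a filing over HTTP and counting word occurrences), using the requests library; the URL and word list are placeholders, not the actual EDGAR paths or terms used in the article:
    import re
    import requests
    url = "https://example.com/path/to/form10k.txt"     # placeholder for a filing URL
    text = requests.get(url, timeout=30).text.lower()
    words_of_interest = ["risk", "uncertainty", "litigation"]
    counts = {w: len(re.findall(r"\b" + re.escape(w) + r"\b", text)) for w in words_of_interest}
    print(counts)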
On Parallel Software Engineering Education Using Python
ERIC Educational Resources Information Center
Marowka, Ami
2018-01-01
Python is gaining popularity in academia as the preferred language to teach novices serial programming. The syntax of Python is clean, easy, and simple to understand. At the same time, it is a high-level programming language that supports multiple programming paradigms such as imperative, functional, and object-oriented. Therefore, by default, it is…
Satiety and eating patterns in two species of constricting snakes.
Nielsen, Torben P; Jacobsen, Magnus W; Wang, Tobias
2011-01-10
Satiety has been studied extensively in mammals, birds and fish but very little information exists on reptiles. Here we investigate time-dependent satiation in two species of constricting snakes, ball pythons (Python regius) and yellow anacondas (Eunectes notaeus). Satiation was shown to depend on both fasting time and prey size. In the ball pythons fed with mice of a relative prey mass RPM (mass of the prey/mass of the snake×100) of 15%, we observed a satiety response that developed between 6 and 12h after feeding, but after 24h pythons regained their appetite. With an RPM of 10% the pythons kept eating throughout the experiment. The anacondas showed a non-significant tendency for satiety to develop between 6 and 12h after ingesting a prey of 20% RPM. Unlike pythons, anacondas remained satiated after 24h. Handling time (from strike until prey swallowed) increased with RPM. We also found a significant decrease in handling time between the first and the second prey and a positive correlation between handling time and the mass of the snake. 2010 Elsevier Inc. All rights reserved.
Homing of invasive Burmese pythons in South Florida: evidence for map and compass senses in snakes
Pittman, Shannon E.; Hart, Kristen M.; Cherkiss, Michael S.; Snow, Ray W.; Fujisaki, Ikuko; Smith, Brian J.; Mazzotti, Frank J.; Dorcas, Michael E.
2014-01-01
Navigational ability is a critical component of an animal's spatial ecology and may influence the invasive potential of species. Burmese pythons (Python molurus bivittatus) are apex predators invasive to South Florida. We tracked the movements of 12 adult Burmese pythons in Everglades National Park, six of which were translocated 21–36 km from their capture locations. Translocated snakes oriented movement homeward relative to the capture location, and five of six snakes returned to within 5 km of the original capture location. Translocated snakes moved straighter and faster than control snakes and displayed movement path structure indicative of oriented movement. This study provides evidence that Burmese pythons have navigational map and compass senses and has implications for predictions of spatial spread and impacts as well as our understanding of reptile cognitive abilities. PMID:24647727
Emer, Sherri A; Mora, Cordula V; Harvey, Mark T; Grace, Michael S
2015-01-01
Large pythons and boas comprise a group of animals whose anatomy and physiology are very different from traditional mammalian, avian and other reptilian models typically used in operant conditioning. In the current study, investigators used a modified shaping procedure involving successive approximations to train wild Burmese pythons (Python molurus bivittatus) to approach and depress an illuminated push button in order to gain access to a food reward. Results show that these large, wild snakes can be trained to accept extremely small food items, associate a stimulus with such rewards via operant conditioning and perform a contingent operant response to gain access to a food reward. The shaping procedure produced robust responses and provides a mechanism for investigating complex behavioral phenomena in massive snakes that are rarely studied in learning research.
NASA Astrophysics Data System (ADS)
Kryuchkov, D. I.; Zalazinsky, A. G.
2017-12-01
Mathematical models and a hybrid modeling system are developed to implement the experimental-calculation method for the engineering analysis and optimization of the plastic deformation of inhomogeneous materials, with the aim of improving metal-forming processes and machines. The software solution integrates Abaqus/CAE with a subroutine for mathematical data processing that makes use of Python libraries and a knowledge base. Practical application of the software solution is exemplified by modeling the extrusion of a bimetallic billet. The results of the engineering analysis and optimization of the extrusion process are presented, with material damage being monitored.
Abba, Yusuf; Ilyasu, Yusuf Maina; Noordin, Mustapha Mohamed
2017-07-01
Captive non-venomous snakes such as pythons and boas are commonly kept in zoos and aquariums and as pets in households. Poor captivity conditions expose these reptiles to numerous pathogens, which may result in disease. The purpose of this study was to investigate the common bacteria isolated from necropsied captive snakes in Malaysia over a five-year period. A total of 27 snake carcasses presented for necropsy at Universiti Putra Malaysia (UPM) were used in this survey. Samples were aseptically obtained at necropsy from different organs/tissues (lung, liver, heart, kidney, oesophagus, lymph node, stomach, spinal cord, spleen, intestine) and cultured onto 5% blood and MacConkey agar, respectively. Gram staining, morphological evaluation and biochemical tests such as oxidase, catalase and coagulase were used to tentatively identify the presumptive bacterial isolates. Pythons had the highest number of cases (81.3%), followed by anacondas (14.8%) and boas (3.7%). Mixed infection accounted for 81.5% of all snakes and was highest in pythons (63%). However, single infection was only observed in pythons (18.5%). A total of 82.7%, 95.4% and 100% of the bacterial isolates from pythons, anacondas and boas, respectively, were gram negative. Aeromonas spp. was the most frequently isolated bacterium in pythons and anacondas, with incidences of 25 (18%) and 8 (36.6%) respectively and no difference (p > 0.05) in incidence, while Salmonella spp. was the most frequently isolated in boas, at a significantly higher (p < 0.05) incidence than in pythons and anacondas. Bacterial species were most frequently isolated from the kidney of pythons (35; 25.2%), the intestines of anacondas (11; 50%) and the stomach of boas (3; 30%). This study showed that captive pythons harbored more bacterial species than anacondas or boas. Most of the bacterial species isolated from these snakes have public health importance and have been incriminated in human infections worldwide. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sandner, Raimar; Vukics, András
2014-09-01
The v2 Milestone 10 release of C++QED is primarily a feature release, which also corrects some problems of the previous release, especially as regards the build system. The adoption of C++11 features has led to many simplifications in the codebase. A full doxygen-based API manual [1] is now provided together with updated user guides. A largely automated, versatile new testsuite directed both towards computational and physics features allows for quickly spotting arising errors. The states of trajectories are now savable and recoverable with full binary precision, allowing for trajectory continuation regardless of evolution method (single/ensemble Monte Carlo wave-function or Master equation trajectory). As the main new feature, the framework now presents Python bindings to the highest-level programming interface, so that actual simulations for given composite quantum systems can now be performed from Python. Catalogue identifier: AELU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 492422 No. of bytes in distributed program, including test data, etc.: 8070987 Distribution format: tar.gz Programming language: C++/Python. Computer: i386-i686, x86 64. Operating system: In principle cross-platform, as yet tested only on UNIX-like systems (including Mac OS X). RAM: The framework itself takes about 60MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1MB. The memory storing the actual data scales with the system dimension for state-vector manipulations, and the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system. Classification: 4.3, 4.13, 6.2. External routines: Boost C++ libraries, GNU Scientific Library, Blitz++, FLENS, NumPy, SciPy Catalogue identifier of previous version: AELU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1381 Does the new version supersede the previous version?: Yes Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [2,3]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [4] and Monte Carlo wave-function simulation [5]. Solution method: Master equation, Monte Carlo wave-function method Reasons for new version: The new version is mainly a feature release, but it does correct some problems of the previous version, especially as regards the build system. Summary of revisions: We give an example for a typical Python script implementing the ring-cavity system presented in Sec. 3.3 of Ref. [2]: Restrictions: Total dimensionality of the system. Master equation-few thousands. Monte Carlo wave-function trajectory-several millions. Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs). Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. 
We use several C++11 features, which limits the range of supported compilers (g++ 4.7, clang++ 3.1). Documentation: http://cppqed.sourceforge.net/ Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks. References: [1] Entry point: http://cppqed.sf.net [2] A. Vukics, C++QEDv2: The multi-array concept and compile-time algorithms in the definition of composite quantum systems, Comput. Phys. Comm. 183 (2012) 1381. [3] A. Vukics, H. Ritsch, C++QED: an object-oriented framework for wave-function simulations of cavity QED systems, Eur. Phys. J. D 44 (2007) 585. [4] H. J. Carmichael, An Open Systems Approach to Quantum Optics, Springer, 1993. [5] J. Dalibard, Y. Castin, K. Mølmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68 (1992) 580.
QuTiP: An open-source Python framework for the dynamics of open quantum systems
NASA Astrophysics Data System (ADS)
Johansson, J. R.; Nation, P. D.; Nori, Franco
2012-08-01
We present an object-oriented open-source framework for solving the dynamics of open quantum systems written in Python. Arbitrary Hamiltonians, including time-dependent systems, may be built up from operators and states defined by a quantum object class, and then passed on to a choice of master equation or Monte Carlo solvers. We give an overview of the basic structure for the framework before detailing the numerical simulation of open system dynamics. Several examples are given to illustrate the build up to a complete calculation. Finally, we measure the performance of our library against that of current implementations. The framework described here is particularly well suited to the fields of quantum optics, superconducting circuit devices, nanomechanics, and trapped ions, while also being ideal for use in classroom instruction. Catalogue identifier: AEMB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 16 482 No. of bytes in distributed program, including test data, etc.: 213 438 Distribution format: tar.gz Programming language: Python Computer: i386, x86-64 Operating system: Linux, Mac OSX, Windows RAM: 2+ Gigabytes Classification: 7 External routines: NumPy (http://numpy.scipy.org/), SciPy (http://www.scipy.org/), Matplotlib (http://matplotlib.sourceforge.net/) Nature of problem: Dynamics of open quantum systems. Solution method: Numerical solutions to Lindblad master equation or Monte Carlo wave function method. Restrictions: Problems must meet the criteria for using the master equation in Lindblad form. Running time: A few seconds up to several tens of minutes, depending on size of underlying Hilbert space.
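A minimal example in the spirit of the framework described above: a damped harmonic oscillator solved with the Lindblad master equation (parameter values are arbitrary):
    import numpy as np
    from qutip import basis, destroy, mesolve
    N = 10                                       # truncated Fock-space dimension
    a = destroy(N)                               # annihilation operator
    H = a.dag() * a                              # oscillator Hamiltonian with hbar*omega = 1
    psi0 = basis(N, 5)                           # start in Fock state |5>
    kappa = 0.1                                  # photon loss rate
    tlist = np.linspace(0.0, 50.0, 200)
    result = mesolve(H, psi0, tlist, [np.sqrt(kappa) * a], [a.dag() * a])
    print(result.expect[0][-1])                  # mean photon number at the final time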
pymzML--Python module for high-throughput bioinformatics on mass spectrometry data.
Bald, Till; Barth, Johannes; Niehues, Anna; Specht, Michael; Hippler, Michael; Fufezan, Christian
2012-04-01
pymzML is an extension to Python that offers (i) easy access to mass spectrometry (MS) data that allows the rapid development of tools, (ii) a very fast parser for mzML data, the standard data format in MS, and (iii) a set of functions to compare or handle spectra. pymzML requires Python 2.6.5+ and is fully compatible with Python 3. The module is freely available at http://pymzml.github.com or on PyPI, is published under the LGPL license, and requires no additional modules to be installed. Contact: christian@fufezan.net.
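As a rough illustration of the parser interface, a script along the following lines reads an mzML file and iterates over its spectra; attribute and key names have changed between pymzML releases, so this is a hedged sketch rather than exact usage, and the file path is a placeholder.

```python
# Hedged sketch of iterating spectra with pymzML; exact spectrum accessors vary by version.
import pymzml

run = pymzml.run.Reader("example.mzML")  # placeholder path to an mzML file
n_ms2 = 0
for spectrum in run:
    # Recent releases expose the MS level as spectrum.ms_level;
    # older releases use dict-style access such as spectrum['ms level'].
    if getattr(spectrum, "ms_level", None) == 2:
        n_ms2 += 1
print("MS2 spectra:", n_ms2)
```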
Myiasis by Megaselia scalaris (Diptera: Phoridae) in a python affected by pulmonitis.
Vanin, S; Mazzariol, S; Menandro, M L; Lafisca, A; Turchetto, M
2013-01-01
Myiases are caused by the presence of maggots in vertebrate tissues and organs. Myiases have been studied widely in humans, farm animals, and pets, whereas reports of myiasis in reptiles are scarce. We describe a case of myiasis caused by Megaselia scalaris (Loew) in an Indian python (Python molurus bivittatus, Kuhl) (Ophidia: Boidae). The python, 15 yr old, born and reared in a terrarium on the mainland of Venice (Italy), was affected by diffuse, purulent pneumonia caused by Burkholderia cepacia. The severe infestation of maggots found in the lungs during an autopsy indicated a myiasis.
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files, and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analysis. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of the object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
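The core idea, independent of the module's own API, is to perturb a calibrated parameter set only within the null space of the model Jacobian, so that realizations remain (approximately) calibrated. The following is a generic NumPy sketch of that projection step with made-up problem sizes; it does not reproduce pyNSMC's ensemble class.

```python
# Generic sketch of the null-space Monte Carlo idea (not the pyNSMC API):
# project random parameter perturbations onto the null space of the Jacobian
# so that the calibrated fit is (approximately) preserved.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par, n_real = 50, 200, 100          # hypothetical problem sizes
J = rng.standard_normal((n_obs, n_par))      # stand-in for a model Jacobian
p_cal = rng.standard_normal(n_par)           # stand-in calibrated parameter vector

# Null-space basis from the SVD: right singular vectors beyond the numerical rank.
U, s, Vt = np.linalg.svd(J, full_matrices=True)
rank = int(np.sum(s > 1e-10 * s[0]))
V_null = Vt[rank:].T                         # shape (n_par, n_par - rank)

# Realizations: calibrated values plus null-space-projected random perturbations.
perturb = rng.standard_normal((n_par, n_real))
realizations = p_cal[:, None] + V_null @ (V_null.T @ perturb)
print(realizations.shape)                    # (n_par, n_real)
```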
Cox, Christian L; Secor, Stephen M
2007-12-01
We explored meal size and clutch (i.e., genetic) effects on the relative proportion of ingested energy that is absorbed by the gut (apparent digestive efficiency), becomes available for metabolism and growth (apparent assimilation efficiency), and is used for growth (production efficiency) for juvenile Burmese pythons (Python molurus). Sibling pythons were fed rodent meals equaling 15%, 25%, and 35% of their body mass, and individuals from five different clutches were fed rodent meals equaling 25% of their body mass. For each of 11-12 consecutive feeding trials, python body mass was recorded, and the feces and urate of each snake were collected, dried, and weighed. Energy contents of meals (mice and rats), feces, urate, and pythons were determined using bomb calorimetry. For siblings fed three different meal sizes, growth rate increased with larger meals, but there was no significant variation among the meal sizes for any of the calculated energy efficiencies. Among the three meal sizes, apparent digestive efficiency, apparent assimilation efficiency, and production efficiency averaged 91.0%, 84.7%, and 40.7%, respectively. In contrast, each of these energy efficiencies varied significantly among the five different clutches. Among these clutches, production efficiency was negatively correlated with standard metabolic rate (SMR). Clutches containing individuals with low SMR were therefore able to allocate more of their ingested energy into growth.
McFadden, Michael S; Bennett, R Avery; Reavill, Drury R; Ragetly, Guillaume R; Clark-Price, Stuart C
2011-09-15
To assess the clinical differences between induction of anesthesia in ball pythons with intracardiac administration of propofol and induction with isoflurane in oxygen and to assess the histologic findings over time in hearts following intracardiac administration of propofol. Prospective randomized study. 30 hatchling ball pythons (Python regius). Anesthesia was induced with intracardiac administration of propofol (10 mg/kg [4.5 mg/lb]) in 18 ball pythons and with 5% isoflurane in oxygen in 12 ball pythons. Induction time, time of anesthesia, and recovery time were recorded. Hearts from snakes receiving intracardiac administration of propofol were evaluated histologically 3, 7, 14, 30, and 60 days following propofol administration. Induction time with intracardiac administration of propofol was significantly shorter than induction time with 5% isoflurane in oxygen. No significant differences were found in total anesthesia time. Recovery following intracardiac administration of propofol was significantly longer than recovery following induction of anesthesia with isoflurane in oxygen. Heart tissue evaluated histologically at 3, 7, and 14 days following intracardiac administration of propofol had mild inflammatory changes, and no histopathologic lesions were seen 30 and 60 days following propofol administration. Intracardiac injection of propofol in snakes is safe and provides a rapid induction of anesthesia but leads to prolonged recovery, compared with that following induction with isoflurane. Histopathologic lesions in heart tissues following intracardiac injection of propofol were mild and resolved after 14 days.
Dorcas, Michael E.; Wilson, John D.; Reed, Robert N.; Snow, Ray W.; Rochford, Michael R.; Miller, Melissa A.; Meshaka, Walter E.; Andreadis, Paul T.; Mazzotti, Frank J.; Romagosa, Christina M.; Hart, Kristen M.
2012-01-01
Invasive species represent a significant threat to global biodiversity and a substantial economic burden. Burmese pythons, giant constricting snakes native to Asia, now are found throughout much of southern Florida, including all of Everglades National Park (ENP). Pythons have increased dramatically in both abundance and geographic range since 2000 and consume a wide variety of mammals and birds. Here we report severe apparent declines in mammal populations that coincide temporally and spatially with the proliferation of pythons in ENP. Before 2000, mammals were encountered frequently during nocturnal road surveys within ENP. In contrast, road surveys totaling 56,971 km from 2003–2011 documented a 99.3% decrease in the frequency of raccoon observations, decreases of 98.9% and 87.5% for opossum and bobcat observations, respectively, and failed to detect rabbits. Road surveys also revealed that these species are more common in areas where pythons have been discovered only recently and are most abundant outside the python's current introduced range. These findings suggest that predation by pythons has resulted in dramatic declines in mammals within ENP and that introduced apex predators, such as giant constrictors, can exert significant top-down pressure on prey populations. Severe declines in easily observed and/or common mammals, such as raccoons and bobcats, bode poorly for species of conservation concern, which often are more difficult to sample and occur at lower densities.
pyMOOGi - python wrapper for MOOG
NASA Astrophysics Data System (ADS)
Adamow, Monika M.
2017-06-01
pyMOOGi is a Python wrapper for MOOG. It allows MOOG to be used in the classical, interactive way, but with all graphics handled by Python libraries. Some MOOG features have been redesigned, such as plotting with the abfind driver. New functions have also been added, such as automatic rescaling of the stellar spectrum for the synth driver. pyMOOGi is an open-source project.
Extending Supernova Spectral Templates for Next Generation Space Telescope Observations
NASA Astrophysics Data System (ADS)
Roberts-Pierel, Justin; Rodney, Steven A.
2018-01-01
Widely used empirical supernova (SN) Spectral Energy Distributions (SEDs) have not historically extended meaningfully into the ultraviolet (UV) or the infrared (IR). However, both are critical for current and future SN research, including the use of UV spectra as probes of poorly understood SN Ia physical properties and the expansion of our view of the universe with high-redshift James Webb Space Telescope (JWST) IR observations. We therefore present a comprehensive set of SN SED templates that have been extended into the UV and IR, as well as an open-source software package written in Python that enables users to generate their own extrapolated SEDs. We have taken a sampling of core-collapse (CC) and Type Ia SNe to obtain a time-dependent distribution of UV and IR colors (U-B, r'-[JHK]); the generated color curves are then used to extrapolate SEDs into the UV and IR. The SED extrapolation process is easily duplicated using a user's own data and parameters via our open-source Python package, SNSEDextend. This work develops the tools necessary to explore the JWST's ability to discriminate between CC and Type Ia SNe, and provides a repository of SN SEDs that will be invaluable to future JWST and WFIRST SN studies.
Shack-Hartmann wavefront sensor using a Raspberry Pi embedded system
NASA Astrophysics Data System (ADS)
Contreras-Martinez, Ramiro; Garduño-Mejía, Jesús; Rosete-Aguilar, Martha; Román-Moreno, Carlos J.
2017-05-01
In this work we present the design and manufacture of a compact Shack-Hartmann wavefront sensor using a Raspberry Pi and a microlens array. The main goal of this sensor is to recover the wavefront of a laser beam and to characterize its spatial phase using a simple and compact Raspberry Pi and its embedded camera. The recovery algorithm is based on a modified version of the Southwell method and was written in Python, as was its user interface. Experimental results and reconstructed wavefronts are presented.
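The Southwell method reconstructs the wavefront from measured x and y slope maps by solving a least-squares system that relates neighbouring phase values to averaged slopes. The sketch below is a generic, dense-matrix formulation of that zonal reconstruction, not the authors' modified algorithm; grid size, spacing and slope values are placeholders.

```python
# Hedged sketch of a Southwell-type zonal wavefront reconstruction from measured slopes.
# Generic least-squares formulation; "x" is the second (column) index, "y" the first (row) index.
import numpy as np

def southwell_reconstruct(sx, sy, h=1.0):
    """Estimate wavefront phi on an n x n grid from x/y slope maps sx, sy on the same grid."""
    n = sx.shape[0]
    idx = np.arange(n * n).reshape(n, n)
    rows, rhs = [], []
    # Relation in x: (phi[i, j+1] - phi[i, j]) / h = (sx[i, j+1] + sx[i, j]) / 2
    for i in range(n):
        for j in range(n - 1):
            r = np.zeros(n * n)
            r[idx[i, j + 1]], r[idx[i, j]] = 1.0 / h, -1.0 / h
            rows.append(r)
            rhs.append(0.5 * (sx[i, j + 1] + sx[i, j]))
    # Relation in y: (phi[i+1, j] - phi[i, j]) / h = (sy[i+1, j] + sy[i, j]) / 2
    for i in range(n - 1):
        for j in range(n):
            r = np.zeros(n * n)
            r[idx[i + 1, j]], r[idx[i, j]] = 1.0 / h, -1.0 / h
            rows.append(r)
            rhs.append(0.5 * (sy[i + 1, j] + sy[i, j]))
    phi, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return phi.reshape(n, n) - phi.mean()    # remove piston; phi is defined up to a constant

# Example with a synthetic tilted wavefront (constant slopes).
n = 8
print(southwell_reconstruct(np.full((n, n), 0.2), np.full((n, n), -0.1)).round(2))
```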
The Gemini Recipe System: a dynamic workflow for automated data reduction
NASA Astrophysics Data System (ADS)
Labrie, Kathleen; Allen, Craig; Hirst, Paul; Holt, Jennifer; Allen, River; Dement, Kaniela
2010-07-01
Gemini's next-generation data reduction software suite aims to offer greater automation of the data reduction process without compromising the flexibility required by science programs using advanced or unusual observing strategies. The Recipe System is central to our new data reduction software. Developed in Python, it facilitates near-real-time processing for data quality assessment, and both on- and off-line science quality processing. The Recipe System can be run as a standalone application or as the data processing core of an automatic pipeline. The data reduction process is defined in a Recipe, written in a science-oriented (as opposed to computer-oriented) language, and consists of a sequence of data reduction steps, called Primitives, which are written in Python and can be launched from the PyRAF user interface by users wishing to use them interactively for more hands-on optimization of the data reduction process. The fact that the same processing Primitives can be run within both the pipeline context and interactively in a PyRAF session is an important strength of the Recipe System. The Recipe System offers dynamic flow control, allowing decisions regarding processing and calibration to be made automatically, based on the pixel data and metadata properties of the dataset at the stage in processing where the decision is being made, and on the context in which the processing is being carried out. Processing history and provenance recording are provided by the AstroData middleware, which also offers header abstraction and data type recognition to facilitate the development of instrument-agnostic processing routines.
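Conceptually, a recipe is an ordered list of primitives applied in sequence, with decisions made from the state of the data at each step. The toy sketch below illustrates that pattern only; the function and key names are invented and do not correspond to the actual Gemini Recipe System API.

```python
# Toy illustration of the recipe/primitive pattern with simple metadata-driven flow control.
# All names here are hypothetical and unrelated to the Gemini software.
def subtract_bias(ds):
    ds["counts"] = [c - ds["bias_level"] for c in ds["counts"]]
    return ds

def flag_quality(ds):
    ds["quality_ok"] = max(ds["counts"]) < ds["saturation"]
    return ds

def run_recipe(ds, primitives):
    for primitive in primitives:
        ds = primitive(ds)
        # Dynamic flow control: a decision based on the dataset's state at this stage.
        if ds.get("quality_ok") is False:
            print("Quality check failed; halting recipe early.")
            break
    return ds

dataset = {"counts": [1200, 1500, 900], "bias_level": 100, "saturation": 60000}
print(run_recipe(dataset, [subtract_bias, flag_quality]))
```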
Ujvari, Beata; Madsen, Thomas
2009-10-16
Telomere length (TL) has been found to be associated with life span in birds and humans. However, other studies have demonstrated that TL does not affect survival among old humans. Furthermore, replicative senescence has been shown to be induced by changes in the protected status of the telomeres rather than by loss of TL. In the present study we explore whether age- and sex-specific telomere dynamics affect life span in a long-lived snake, the water python (Liasis fuscus). Erythrocyte TL was measured using the Telo TAGGG TL Assay Kit (Roche). In contrast to other vertebrates, the TL of hatchling pythons was significantly shorter than that of older snakes. However, during their first year of life hatchling TL increased substantially. While the TL of older snakes decreased with age, we did not observe any correlation between TL and age in cross-sectional sampling. In older snakes, female TL was longer than that of males. When using recapture as a proxy for survival, our results do not support the notion that longer telomeres result in increased water python survival/longevity. In fish, high telomerase activity has been observed in somatic cells exhibiting high proliferation rates. Hatchling pythons show similarly high somatic cell proliferation rates; thus, the increase in TL in this group may have been caused by increased telomerase activity. In older humans, female TL is longer than that of males; this has been suggested to be caused by high estrogen levels that stimulate increased telomerase activity. Thus, high estrogen levels may also have caused the longer telomeres in female pythons. The lack of correlation between TL and age among old snakes, and the fact that longer telomeres did not appear to affect python survival, do not support the idea that erythrocyte telomere dynamics have a major impact on water python longevity.
Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2008-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085
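The simulator-independent description language referred to here is, in this line of work, the PyNN API, in which the same script can target different software simulators or a hardware back-end simply by swapping the imported module. The sketch below assumes a PyNN-style interface; names follow recent PyNN releases and may differ from the version used in the paper, and the hardware back-end module is not shown.

```python
# Hedged PyNN-style sketch: the back-end is chosen by the import, so the same
# description could, in principle, run on a software simulator or on neuromorphic hardware.
import pyNN.nest as sim   # e.g. swap for another simulator or a hardware back-end module

sim.setup(timestep=0.1)   # ms

stim = sim.Population(20, sim.SpikeSourcePoisson(rate=30.0))
cells = sim.Population(20, sim.IF_cond_exp())
sim.Projection(stim, cells, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.01))

cells.record("spikes")
sim.run(1000.0)           # ms

spiketrains = cells.get_data().segments[0].spiketrains
print(sum(len(st) for st in spiketrains), "spikes recorded")
sim.end()
```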
A Flexible Component based Access Control Architecture for OPeNDAP Services
NASA Astrophysics Data System (ADS)
Kershaw, Philip; Ananthakrishnan, Rachana; Cinquini, Luca; Lawrence, Bryan; Pascoe, Stephen; Siebenlist, Frank
2010-05-01
Network data access services such as OPeNDAP enable widespread access to data across user communities. However, without ready means to restrict access to data for such services, data providers and data owners are constrained from making their data more widely available. Even with such capability, the range of different security technologies available can make interoperability between services and user client tools a challenge. OPeNDAP is a key data access service in the infrastructure under development to support CMIP5 (the Coupled Model Intercomparison Project, Phase 5). The work is being carried out as part of an international collaboration including the US Earth System Grid and Curator projects and the EU-funded IS-ENES and Metafor projects. This infrastructure will bring together petabytes of climate model data and associated metadata from over twenty modelling centres around the world in a federation with a core archive mirrored at three data centres. A security system is needed to meet the requirements of organisations responsible for model data, including the ability to restrict data access to registered users, keep them up to date with changes to data and services, audit access and protect finite computing resources. Individual organisations have existing tools and services such as OPeNDAP with which users in the climate research community are already familiar. The security system should overlay access control in a way which maintains the usability and ease of access of these services. The BADC (British Atmospheric Data Centre) has been working in collaboration with the Earth System Grid development team and partner organisations to develop the security architecture. OpenID and MyProxy were selected at an early stage in the ESG project to provide single sign-on capability across the federation of participating organisations. Building on the existing OPeNDAP specification, an architecture based on pluggable server-side components has been developed at the BADC. These components filter requests to the service they protect and apply the required authentication and authorisation schemes. Filters have been developed for OpenID and SSL client-based authentication, the latter enabling access with MyProxy-issued credentials. By preserving a clear separation between the security and application functionality, multiple authentication technologies may be supported without the need for modification to the underlying OPeNDAP application. The software has been developed in the Python programming language, securing the Python-based OPeNDAP implementation, PyDAP. It utilises the Python WSGI (Web Server Gateway Interface) specification to create distinct security filter components. Work is also currently underway to develop a parallel Java-based filter implementation to secure the THREDDS Data Server. Whilst the ability to apply this flexible approach to the server-side security layer is important, the development of compatible client software is vital to the take-up of these services across a wide user base. To date, PyDAP- and wget-based clients have been tested, and work is planned to integrate the required security interface into the netCDF API. This forms part of ongoing collaboration with the OPeNDAP user and development community to ensure interoperability.
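To make the filter idea concrete, the following is a minimal, generic WSGI middleware that rejects unauthenticated requests before they reach the wrapped application. The header name and token check are placeholders and do not reflect the OpenID or SSL filters developed at the BADC.

```python
# Minimal sketch of a WSGI security "filter": a middleware component that enforces
# authentication before handing the request to the protected application (e.g. PyDAP).
def auth_filter(app, validate_token):
    def middleware(environ, start_response):
        token = environ.get("HTTP_AUTHORIZATION", "")
        if not validate_token(token):
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"Authentication required\n"]
        return app(environ, start_response)   # hand off to the wrapped application
    return middleware

# Usage: wrap any WSGI application; filters of this kind can be stacked
# (e.g. authentication first, then authorisation).
def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"data payload\n"]

secured_app = auth_filter(demo_app, validate_token=lambda t: t == "Bearer example-token")
```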
Pybus -- A Python Software Bus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavrijsen, Wim T.L.P.
2004-10-14
A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as channeling of inter-component communication. Python, a popular open-source programming language, encourages a modular design of software written in it, but it offers little or no component functionality. However, the language and its interpreter provide sufficient hooks to implement a thin, integral layer of component support. This functionality can be presented to the developer in the form of a module, making it very easy to use. This paper describes a Python module, PyBus, with which the concept of a "software bus" can be realized in Python. It demonstrates, within the context of the ATLAS software framework Athena, how PyBus can be used for the installation and (run-time) configuration of software, not necessarily Python modules, from a Python application in a way that is transparent to the end-user.
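As a rough illustration of the software-bus concept (and explicitly not the PyBus API), a thin component layer can be built on the standard importlib machinery: components are discovered by module path, loaded or replaced at run time, and messages are channelled to them by name.

```python
# Generic sketch of the "software bus" concept using only the standard library;
# class and method names are invented and unrelated to PyBus.
import importlib

class Bus:
    def __init__(self):
        self._components = {}

    def load(self, name, module_path):
        """Load (or replace at run time) a component identified by a module path."""
        module = importlib.import_module(module_path)
        self._components[name] = module
        return module

    def unload(self, name):
        self._components.pop(name, None)

    def send(self, name, message, *args, **kwargs):
        """Channel a call (message) to a named component."""
        return getattr(self._components[name], message)(*args, **kwargs)

# Usage:
bus = Bus()
bus.load("math", "math")
print(bus.send("math", "sqrt", 2.0))
```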
Saccular lung cannulation in a ball python (Python regius) to treat a tracheal obstruction.
Myers, Debbie A; Wellehan, James F X; Isaza, Ramiro
2009-03-01
An adult male ball python (Python regius) presented in a state of severe dyspnea characterized by open-mouth breathing and vertical positioning of the head and neck. The animal had copious discharge in the tracheal lumen acting as an obstruction. A tube was placed through the body wall into the caudal saccular aspect of the lung to allow the animal to breathe while treatment was initiated. The ball python's dyspnea immediately improved. Diagnostics confirmed a bacterial respiratory infection with predominantly Providencia rettgeri. The saccular lung (air sac) tube was removed after 13 days. Pulmonary endoscopy before closure showed minimal damage with a small amount of hemorrhage in the surrounding muscle tissue. Respiratory disease is a common occurrence in captive snakes and can be associated with significant morbidity and mortality. Saccular lung cannulation is a relatively simple procedure that can alleviate tracheal narrowing or obstruction, similar to air sac cannulation in birds.
STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.
Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik
2012-05-10
Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
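For orientation, the kernel of any SSA-based solver is the Gillespie loop: sample the time to the next reaction from the total propensity, then choose which reaction fires. The sketch below is the basic direct method for a single reversible reaction, written with plain NumPy; it illustrates the stochastic kinetics only and is unrelated to the STEPS Python API or its faster composition-and-rejection implementation.

```python
# Generic direct-method Gillespie SSA for a single reversible reaction A <-> B.
# Rate constants, copy numbers and end time are arbitrary illustrations.
import numpy as np

def gillespie(a0, b0, kf, kr, t_end, seed=0):
    rng = np.random.default_rng(seed)
    t, a, b = 0.0, a0, b0
    history = [(t, a, b)]
    while t < t_end:
        rates = np.array([kf * a, kr * b])    # propensities of A->B and B->A
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)     # waiting time to the next reaction
        if rng.random() < rates[0] / total:   # choose which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        history.append((t, a, b))
    return history

print(gillespie(100, 0, kf=0.5, kr=0.2, t_end=10.0)[-1])   # final (t, A, B)
```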